id | pid | input | output
---|---|---|---
d824f837d8bc17f399e9b8ce8b30795944df0d51 | d824f837d8bc17f399e9b8ce8b30795944df0d51_0 | Q: How do they show their model discovers underlying syntactic structure?
Text: Introduction
Linguistic theories generally regard natural language as consisting of two parts: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0 . To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactic knowledge to combine those meanings correctly. Current neural language models can provide meaningful word representations BIBREF1 , BIBREF2 , BIBREF3 . However, standard recurrent neural networks only implicitly model syntax, and thus fail to efficiently use structure information BIBREF4 .
Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BIBREF5 , BIBREF4 , BIBREF6 . Integrating syntactic structure into a language model is important for several reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BIBREF7 , BIBREF8 , BIBREF9 ; 2) to capture complex linguistic phenomena, such as the long-term dependency problem BIBREF4 and compositional effects BIBREF5 ; 3) to provide shortcuts for gradient back-propagation BIBREF6 .
A syntactic parser is the most common source of structure information. Supervised parsers can achieve very high performance on well-constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BIBREF5 , or how to generate the next word given the previous words BIBREF10 . However, only major languages have treebank data for training parsers, and such data requires expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers.
Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistics BIBREF11 , BIBREF12 , BIBREF13 . Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BIBREF14 ; to create a dependency structure that better suits a particular NLP application BIBREF10 ; to empirically argue for or against the poverty of the stimulus BIBREF15 , BIBREF16 ; and to examine cognitive issues in language learning BIBREF17 .
In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:
We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts.
Related Work
The idea of introducing structure, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BIBREF5 , BIBREF4 learn a bottom-up encoder, taking as input a parse tree supplied by an external parser. Other models are able to infer a tree at test time, but still need a supervised signal on the tree structure during training, for example BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Moreover, BIBREF22 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but unlike theirs, it is a recurrent network that implicitly models tree structure through attention.
Apart from using recursive networks to capture structure, another line of research tries to learn recurrent features at multiple scales, which can be dated back to the 1990s (e.g. BIBREF23 , BIBREF24 , BIBREF25 ). The NARX RNN BIBREF25 is another example, which used a feed-forward net taking different inputs with predefined time delays to model long-term dependencies. More recently, BIBREF26 also used multiple layers of recurrent networks with different pre-defined updating frequencies. Instead, our model tries to learn the structure from data rather than predefining it. In that respect, BIBREF6 relates to our model since it proposes a hierarchical multi-scale structure with binary gates controlling intra-layer connections, and its gating mechanism is also learned from data. The difference is that their gating mechanism controls the updates of higher layers directly, while ours controls them softly through an attention mechanism.
In terms of language modeling, syntactic language modeling can be dated back to BIBREF27 . BIBREF28 , BIBREF29 have also proposed language models with a top-down parsing mechanism. Recently, BIBREF30 , BIBREF31 have introduced neural networks into this space. They learn both a discriminative and a generative model with top-down parsing, trained with a supervision signal from parsed sentences in the corpus. There are also dependency-based approaches using neural networks, including BIBREF32 , BIBREF33 , BIBREF34 .
Parsers are also related to our work since they likewise infer grammatical tree structure given a sentence. For example, SPINN BIBREF35 is a shift-reduce parser that uses an LSTM as its composition function. The transition classifier in SPINN is trained with supervision on the output of the Stanford PCFG Parser BIBREF36 . Unsupervised parsers are more aligned with what our model is doing. BIBREF12 presented a generative model for the unsupervised learning of dependency structures. BIBREF11 is a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. We compare our parsing quality with these two papers in Section SECREF43 .
Motivation
Suppose we have a sequence of tokens INLINEFORM0 governed by the tree structure shown in Figure FIGREF4 . The leaves INLINEFORM1 are observed tokens. Node INLINEFORM2 represents the meaning of the constituent formed by its leaves INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 stand for the leftmost and rightmost children. Root INLINEFORM6 represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings.
Directly modeling the tree structure is a challenging task, usually requiring supervision to learn BIBREF4 . In addition, relying on tree structures can result in a model that is not sufficiently robust to ungrammatical sentences BIBREF37 . In contrast, recurrent models provide a convenient way to model sequential data, with the current hidden state depending only on the last hidden state. This makes such models more robust when facing nonconforming sequential data, but they neglect the real dependency relations that dominate the structure of natural language sentences.
In this paper, we use skip connections to integrate structured dependency relations into a recurrent neural network. In other words, the current hidden state does not only depend on the last hidden state, but also on previous hidden states that have a direct syntactic relation to the current one.
Figure FIGREF5 shows the structure of our model. The non-leaf node INLINEFORM0 is represented by a set of hidden states INLINEFORM1 , where INLINEFORM2 is the leftmost descendant leaf and INLINEFORM3 is the rightmost one. Arrows show skip connections built by our model according to the latent structure. Skip connections are controlled by gates INLINEFORM4 . In order to define INLINEFORM5 , we introduce a latent variable INLINEFORM6 to represent the local structural context of INLINEFORM7 :
and gates are defined as: DISPLAYFORM0
Given this architecture, the sibling dependency relation is modeled by at least one skip connection. The skip connection directly feeds information forward and passes gradients backward. The parent-to-child relation is modeled implicitly by the skip-connection relations between nodes.
The model recurrently updates the hidden states according to: DISPLAYFORM0
and the probability distribution for next word is approximated by: DISPLAYFORM0
where INLINEFORM0 are gates that control skip connections. Both INLINEFORM1 and INLINEFORM2 have a structured attention mechanism that takes INLINEFORM3 as input and forces the model to focus on the most related information. Since INLINEFORM4 is an unobserved latent variable, we explain an approximation for INLINEFORM5 in the next section. The structured attention mechanism is explained in Section SECREF21 .
Modeling Local Structure
In this section we give a probabilistic view on how to model the local structure of language. A detailed elaboration for this section is given in the Appendix. At time step INLINEFORM0 , INLINEFORM1 represents the probability of choosing one out of INLINEFORM2 possible local structures. We propose to model the distribution by the Stick-Breaking Process: DISPLAYFORM0
The formula can be understood by noting that after the time steps INLINEFORM0 have had their probabilities assigned, INLINEFORM1 is the remaining probability, and INLINEFORM2 is the portion of the remaining probability that we assign to time step INLINEFORM3 . Variable INLINEFORM4 is parametrized in the next section.
As shown in the Appendix, the expectation of the gate value INLINEFORM0 is the Cumulative Distribution Function (CDF) of INLINEFORM1 . Thus, we can replace the discrete gate value by its expectation: DISPLAYFORM0
With these relaxations, Eq. EQREF9 and EQREF10 can be approximated by using a soft gating vector to update the hidden state and predict the next token.
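To make this relaxation concrete, the sketch below computes the stick-breaking distribution over the latent variable and the corresponding CDF-style soft gates from a vector of per-position values (how those values are produced is the subject of the next section). The scan direction and the handling of the leftover stick mass are our assumptions, not taken from the paper; this is an illustrative sketch, not the authors' implementation.

```python
import torch

def stick_breaking_gates(alpha):
    """Stick-breaking sketch. alpha[j] in (0, 1) is the portion of the remaining
    probability passed further back past position j (positions ordered oldest ->
    most recent). Returns (p, g) where p[i] is the probability that the latent
    variable picks position i and g[i] is its CDF, used as the soft gate value."""
    # g[i] = prod_{j > i} alpha[j]; the empty product is 1 at the most recent position
    rev_cum = torch.cumprod(torch.flip(alpha, dims=[0]), dim=0)
    g = torch.flip(torch.cat([torch.ones(1), rev_cum[:-1]]), dims=[0])
    p = (1.0 - alpha) * g
    # Assumption: leftover stick mass is assigned to the oldest position, so that
    # p sums to 1 and g is exactly the CDF of p.
    p[0] = g[0]
    return p, g

# Tiny usage example with arbitrary alpha values
alpha = torch.tensor([0.9, 0.8, 0.1, 0.95])
p, g = stick_breaking_gates(alpha)
print(p, p.sum(), g)
```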
Parsing Network
In Eq. EQREF12 , INLINEFORM0 is the portion of the remaining probability that we assign to position INLINEFORM1 . The stick-breaking process should assign high probability to INLINEFORM2 , which is the closest constituent-beginning word, so the model should assign a large INLINEFORM3 to words beginning new constituents. If INLINEFORM4 itself is a constituent-beginning word, the model should assign a large INLINEFORM5 to words beginning larger constituents. In other words, the model will consider longer dependency relations for the first word in a constituent. Given the sentence in Figure FIGREF4 , at time step INLINEFORM6 , both INLINEFORM7 and INLINEFORM8 should be close to 1, and all other INLINEFORM9 should be close to 0.
In order to parametrize INLINEFORM0 , our basic hypothesis is that words in the same constituent should have a closer syntactic relation to each other, and that this syntactic proximity can be represented by a scalar value. From the tree-structure point of view, the shortest path between leaves in the same subtree is shorter than the one between leaves in different subtrees.
To model syntactic proximity, we introduce a new feature, Syntactic Distance. For a sentence with length INLINEFORM0 , we define a set of INLINEFORM1 real-valued scalar variables INLINEFORM2 , with INLINEFORM3 representing a measure of the syntactic relation between the pair of adjacent words INLINEFORM4 . INLINEFORM5 could be the last word in the previous sentence or a padding token. For time step INLINEFORM6 , we want to find the closest words INLINEFORM7 that have a larger syntactic distance than INLINEFORM8 . Thus INLINEFORM9 can be defined as: DISPLAYFORM0
where INLINEFORM0 . INLINEFORM1 is the temperature parameter that controls the sensitivity of INLINEFORM2 to the differences between distances.
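Since the closed-form expression is lost in this extraction, here is one parametrization consistent with the description: a temperature-scaled soft comparison that is close to 1 when the current distance exceeds a previous one and close to 0 otherwise. The use of hardtanh (a sigmoid would serve equally well) and the default temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_compare(d_t, d_prev, tau=10.0):
    """alpha[j]: soft indicator of (d_t > d_prev[j]), close to 1 when the current
    position's distance is larger than that of previous position j, close to 0
    otherwise. tau is the temperature controlling how sharp the comparison is."""
    return (F.hardtanh((d_t - d_prev) * tau) + 1.0) / 2.0

# Feeding the result into the gate computation sketched earlier:
d_prev = torch.tensor([0.2, 1.5, 0.3, 0.1])   # distances of earlier positions
alpha = soft_compare(torch.tensor(0.8), d_prev)
```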
The Syntactic Distance has some nice properties that allow us both to infer a tree structure from it and to be robust to the intermediate non-valid tree structures that the model may encounter during learning. In the Appendix we list these properties and further explain the meanings of their values.
BIBREF38 shows that it is possible to identify the beginning and ending words of a constituent using local information. In our model, the syntactic distance between a given token (which is usually represented as a word embedding vector INLINEFORM0 ) and its previous token INLINEFORM1 is provided by a convolutional kernel over a set of consecutive previous tokens INLINEFORM2 . This convolution is depicted as the gray triangles shown in Figure FIGREF20 . Each triangle represents two layers of convolution. Formally, the syntactic distance INLINEFORM3 between token INLINEFORM4 and INLINEFORM5 is computed by DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 are the kernel parameters. INLINEFORM2 and INLINEFORM3 can be seen as another convolutional kernel with window size 1, convolved over the INLINEFORM4 's. Here the kernel window size INLINEFORM5 determines how far back into the history node INLINEFORM6 can reach while computing its syntactic distance INLINEFORM7 ; we therefore call it the look-back range.
Convolving INLINEFORM0 and INLINEFORM1 over the whole sequence with length INLINEFORM2 yields a set of distances. For the tokens at the beginning of the sequence, we simply pad INLINEFORM3 zero vectors to the front of the sequence in order to get INLINEFORM4 outputs.
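A minimal sketch of this two-layer convolutional Parsing Network follows. Only the look-back window, the window-size-1 second kernel, the scalar output per position, and the front zero-padding come from the text; the layer sizes and the choice of ReLU are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceParser(nn.Module):
    """Two-layer causal convolution over word embeddings that outputs one
    scalar syntactic distance per position (a sketch, not the exact model)."""
    def __init__(self, emb_size=128, hidden_size=256, look_back=5):
        super().__init__()
        self.look_back = look_back
        self.conv1 = nn.Conv1d(emb_size, hidden_size, kernel_size=look_back)
        self.conv2 = nn.Conv1d(hidden_size, 1, kernel_size=1)  # window-size-1 kernel

    def forward(self, emb):                       # emb: (batch, seq_len, emb_size)
        x = emb.transpose(1, 2)                   # (batch, emb_size, seq_len)
        x = F.pad(x, (self.look_back - 1, 0))     # zero-pad the front to keep seq_len outputs
        d = self.conv2(F.relu(self.conv1(x)))     # (batch, 1, seq_len)
        return d.squeeze(1)                       # one distance per position

# Usage: distances for a batch of 2 sequences of 10 embeddings
parser = DistanceParser()
d = parser(torch.randn(2, 10, 128))              # shape (2, 10)
```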
Reading Network
The Reading Network generates new states INLINEFORM0 conditioned on the input INLINEFORM1 , previous memory states INLINEFORM2 , and gates INLINEFORM3 , as shown in Eq. EQREF9 .
Similar to the Long Short-Term Memory-Network (LSTMN) BIBREF39 , the Reading Network maintains the memory states with two sets of vectors: a hidden tape INLINEFORM0 and a memory tape INLINEFORM1 , where INLINEFORM2 is the upper bound for the memory span. The hidden state INLINEFORM3 is now represented by a tuple of two vectors INLINEFORM4 . The Reading Network captures the dependency relations through a modified attention mechanism: structured attention. At each step of recurrence, the model summarizes the previous recurrent states via the structured attention mechanism, then performs a normal LSTM update, with the hidden and cell states output by the attention mechanism.
At each time step INLINEFORM0 , the read operation attentively links the current token to previous memories with a structured attention layer: DISPLAYFORM0
where INLINEFORM0 is the dimension of the hidden state. Modulated by the gates in Eq. EQREF13 , the structured intra-attention weight is defined as: DISPLAYFORM0
This yields a probability distribution over the hidden state vectors of previous tokens. We can then compute adaptive summary vectors for the previous hidden tape and memory tape, denoted by INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0
Structured attention provides a way to model the dependency relations shown in Figure FIGREF4 .
The Reading Network takes INLINEFORM0 , INLINEFORM1 and INLINEFORM2 as input and computes the values of INLINEFORM3 and INLINEFORM4 by the LSTM recurrent update BIBREF40 . Then the write operation concatenates INLINEFORM5 and INLINEFORM6 to the end of the hidden and memory tapes.
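One Reading Network step can be sketched as follows: gate-modulated attention over the hidden and memory tapes produces the adaptive summaries, which then serve as the previous hidden and cell states of an ordinary LSTM update. The scaled-dot-product scoring function and the renormalization after gating are assumptions, since the exact formulas are not recoverable from this extraction.

```python
import torch
import torch.nn as nn

class ReadingNetworkStep(nn.Module):
    """One recurrent step of the Reading Network (sketch)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.query = nn.Linear(input_size, hidden_size)
        self.scale = hidden_size ** 0.5

    def forward(self, x_t, hidden_tape, memory_tape, gates):
        # x_t: (batch, input); tapes: (batch, span, hidden); gates: (batch, span)
        q = self.query(x_t).unsqueeze(2)                               # (batch, hidden, 1)
        scores = torch.bmm(hidden_tape, q).squeeze(2) / self.scale     # (batch, span)
        weights = gates * torch.softmax(scores, dim=1)                 # structured attention
        weights = weights / (weights.sum(dim=1, keepdim=True) + 1e-8)  # renormalize after gating
        h_bar = torch.bmm(weights.unsqueeze(1), hidden_tape).squeeze(1)   # adaptive summary
        c_bar = torch.bmm(weights.unsqueeze(1), memory_tape).squeeze(1)
        h_t, c_t = self.cell(x_t, (h_bar, c_bar))                      # normal LSTM update
        return h_t, c_t                                                # appended to the tapes by the caller
```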
Predict Network
The Predict Network models the probability distribution of the next word INLINEFORM0 , conditioned on the hidden states INLINEFORM1 and the gates INLINEFORM2 . Note that, at time step INLINEFORM3 , the model cannot observe INLINEFORM4 , so a temporary estimation of INLINEFORM5 is computed conditioned on INLINEFORM6 : DISPLAYFORM0
From there we compute the corresponding INLINEFORM0 and INLINEFORM1 for Eq. EQREF10 . We parametrize the INLINEFORM2 function as: DISPLAYFORM0
where INLINEFORM0 is an adaptive summary of INLINEFORM1 , output by the structured attention controlled by INLINEFORM2 . INLINEFORM3 could be a simple feed-forward MLP, or a more complex architecture, like a ResNet, to add more depth to the model.
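A sketch of the Predict Network follows. Using the prediction-time gates directly as (renormalized) attention weights is a simplification of the structured attention described above, and the single hidden layer stands in for the "MLP or ResNet" choice; both are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class PredictNetwork(nn.Module):
    """Predicts the next-word distribution from the current hidden state and a
    gate-weighted summary of earlier hidden states (sketch)."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, vocab_size),
        )

    def forward(self, h_t, hidden_tape, pred_gates):
        # h_t: (batch, hidden); hidden_tape: (batch, span, hidden); pred_gates: (batch, span)
        w = pred_gates / (pred_gates.sum(dim=1, keepdim=True) + 1e-8)   # simplified attention weights
        l_t = torch.bmm(w.unsqueeze(1), hidden_tape).squeeze(1)         # adaptive summary of earlier states
        return torch.log_softmax(self.f(torch.cat([h_t, l_t], dim=1)), dim=1)
```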
Experiments
We evaluate the proposed model on three tasks: character-level language modeling, word-level language modeling, and unsupervised constituency parsing.
Character-level Language Model
From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leaves. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.
When training, we use truncated back-propagation and feed the final memory position from the previous batch as the initial memory of the next one. At the beginning of training and test time, the model's initial hidden states are filled with zeros. Optimization is performed with Adam using learning rate INLINEFORM0 , weight decay INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . We carry out gradient clipping with maximum norm 1.0. The learning rate is multiplied by 0.1 whenever validation performance does not improve during 2 checkpoints. These checkpoints are performed at the end of each epoch. We also apply layer normalization BIBREF41 to the Reading Network and batch normalization to the Predict Network and Parsing Network. For all of the character-level language modeling experiments, we apply the same procedure, varying only the number of hidden units, mini-batch size and dropout rate.
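The training procedure described above can be sketched as below. The exact learning rate, weight decay and Adam betas are not recoverable from this extraction, so the values here are placeholders, and `model`, `evaluate` and the data iterators are hypothetical names; only gradient clipping at norm 1.0, the 0.1 learning-rate decay after two stalled checkpoints, and carrying the final memory across truncated-BPTT segments come from the text.

```python
import torch
import torch.nn.functional as F

def train(model, train_batches, valid_batches, evaluate, vocab_size, num_epochs=50):
    # Placeholder hyper-parameter values; the paper's exact settings are elided above.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=2)
    hidden = None                                           # zeros are used inside the model at the start
    for epoch in range(num_epochs):
        for x, y in train_batches:                          # truncated back-propagation segments
            output, hidden = model(x, hidden)
            hidden = tuple(h.detach() for h in hidden)      # carry final memory into the next batch
            loss = F.cross_entropy(output.view(-1, vocab_size), y.view(-1))
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # max norm 1.0
            optimizer.step()
        scheduler.step(evaluate(model, valid_batches))      # checkpoint at the end of each epoch
```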
We process the Penn Treebank dataset BIBREF42 by following the procedure introduced in BIBREF43 . For character-level PTB, the Reading Network has two recurrent layers and the Predict Network has one residual block. The hidden state size is 1024 units. The input and output embedding sizes are 128 and are not shared. Look-back range INLINEFORM0 , temperature parameter INLINEFORM1 , upper bound of memory span INLINEFORM2 . We use a batch size of 64 and truncated back-propagation with 100 timesteps. The values of dropout on input/output embeddings, between recurrent layers, and on recurrent states were (0, 0.25, 0.1) respectively.
In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint for separating words. In other words, if the model sees a space, it will attend over all previous steps; if the model sees a letter, it will attend no further back than the last space step. The model autonomously learned to avoid inter-word attention connections and to use the hidden states of space (separator) tokens to summarize previous information. This is strong evidence that the model can understand the latent structure of the data. As a result our model achieves state-of-the-art performance and significantly outperforms baseline models. It is worth noting that HM-LSTM BIBREF6 also induces similar structure from data without supervision, but the discrete operations in HM-LSTM make their training procedure more complicated than ours.
Word-level Language Model
Compared to character-level language modeling, word-level language modeling needs to deal with complex syntactic structure and various linguistic phenomena, but it has fewer long-term dependencies. We evaluate the word-level variant of our language model on a preprocessed version of the Penn Treebank (PTB) BIBREF42 and Text8 BIBREF49 datasets.
We apply the same procedure and hyper-parameters as in the character-level language model, except that optimization is performed with Adam with INLINEFORM0 . This turns off the exponential moving average for the estimates of the means of the gradients BIBREF50 . We also adapt the number of hidden units, mini-batch size and dropout rate according to the different tasks.
We process the Penn Treebank dataset BIBREF43 by following the procedure introduced in BIBREF51 . For word-level PTB, the Reading Network has two recurrent layers and the Predict Network does not have a residual block. The hidden state size is 1200 units and the input and output embedding sizes are 800 and shared BIBREF52 , BIBREF53 . Look-back range INLINEFORM0 , temperature parameter INLINEFORM1 and upper bound of memory span INLINEFORM2 . We use a batch size of 64 and truncated back-propagation with 35 time-steps. The values of dropout on input/output embeddings, between recurrent layers, and on recurrent states were (0.7, 0.5, 0.5) respectively.
The Text8 dataset contains 17M training tokens and has a vocabulary size of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance. As this dataset contains various articles from Wikipedia, longer-term information (such as the current topic) plays a bigger role than in the PTB experiments BIBREF61 . We apply the same procedure and hyper-parameters as in character-level PTB, except that we use a batch size of 128. The values of dropout on input/output embeddings, between recurrent layers, and on recurrent states were (0.4, 0.2, 0.2) respectively.
In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resources used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyper-parameter tuning process. As shown in Table TABREF42 , our method outperforms the baseline methods. It is worth noting that the continuous cache pointer can also be applied to the output of our Predict Network without modification. Visualizations of tree structures generated from the learned PTB language model are included in the Appendix. In Table TABREF40 , we show the test perplexity for different variants of PRPN, each of which removes part of the model. When the Parsing Network is removed, we observe a significant drop in performance. This stands as empirical evidence for the benefit of having structure information to control attention.
Unsupervised Constituency Parsing
The unsupervised constituency parsing task compares the tree structure inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset. WSJ10 consists of the 7422 sentences in the Penn Treebank Wall Street Journal section which contain 10 words or fewer after the removal of punctuation and null elements. Evaluation was done by checking whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates binary trees. Although standard constituency parse trees are not limited to binary trees, previous unsupervised constituency parsing models also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with several baseline methods, which are explained in the Appendix.
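For concreteness, a sketch of the unlabeled F1 computation for a single sentence is given below; the span-index convention and per-sentence (rather than corpus-level) averaging are our assumptions.

```python
def unlabeled_f1(pred_spans, gold_spans, sent_len):
    """Unlabeled constituent F1 for one sentence. Spans are (start, end) word
    index pairs with end exclusive; trivial spans (length one, whole sentence)
    are discarded, as in the WSJ10 evaluation described above."""
    def keep(spans):
        return {(i, j) for (i, j) in spans if j - i > 1 and not (i == 0 and j == sent_len)}
    pred, gold = keep(pred_spans), keep(gold_spans)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: one correct span out of two proposed, two gold non-trivial spans -> F1 = 0.5
print(unlabeled_f1({(0, 2), (2, 5), (0, 5)}, {(0, 2), (3, 5), (0, 5)}, sent_len=5))
```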
Different from the previous experimental setting, the model treats each sentence independently during training and test time. When training, we feed one batch of sentences at each iteration. Within a batch, shorter sentences are padded with 0. At the beginning of the iteration, the model's initial hidden states are filled with zeros. When testing, we feed sentences one by one to the model, then use the gate values output by the model to recursively combine tokens into constituents, as described in the Appendix.
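Since the appendix with the exact procedure is not part of this extraction, the sketch below shows one natural way to turn the learned syntactic distances (equivalently, the gate boundaries) into a binary constituency tree: recursively split each span at its largest internal distance.

```python
def build_tree(tokens, distances):
    """Greedily convert per-position syntactic distances into a binary tree.
    distances[i] measures the break between tokens[i-1] and tokens[i];
    distances[0] relates to the previous sentence / padding and is ignored."""
    if len(tokens) <= 1:
        return tokens[0] if tokens else None
    split = max(range(1, len(tokens)), key=lambda i: distances[i])
    return (build_tree(tokens[:split], distances[:split]),
            build_tree(tokens[split:], distances[split:]))

# Example: "the cat sat" with a large break before "sat" -> (('the', 'cat'), 'sat')
print(build_tree(["the", "cat", "sat"], [0.0, 0.1, 0.8]))
```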
Table TABREF44 summarizes the results. Our model significantly outperforms the RANDOM baseline, indicating a high consistency with human annotation. Our model also shows comparable performance with the CCM model. In fact, our Parsing Network and CCM both focus on the relation between successive tokens. As described in Section SECREF14 , our model computes the syntactic distance between all successive pairs of tokens, and our parsing algorithm then recursively assembles tokens into constituents according to the learned distances. CCM also recursively models the probability that a contiguous subsequence of a sentence is a constituent. This also explains why our model is outperformed by the DMV+CCM and UML-DOP models: the DMV+CCM model has extra information from a dependency parser, and the UML-DOP approach captures both contiguous and non-contiguous lexical dependencies BIBREF13 .
Conclusion
In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. We introduce a new neural parsing network, the Parsing-Reading-Predict Network, that can make differentiable parsing decisions. We use a new structured attention mechanism to control skip connections in a recurrent neural network, so that the induced syntactic structure information can be used to improve the model's performance. Via this mechanism, the gradient can be directly back-propagated from the language model loss function into the neural Parsing Network. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling tasks. Experiments also show that the inferred syntactic structure is highly correlated with human expert annotations.
Acknowledgement
The authors would like to thank Timothy J. O'Donnell and Chris Dyer for the helpful discussions.
| By visualizing syntactic distance estimated by the parsing network
2ff3898fbb5954aa82dd2f60b37dd303449c81ba | 2ff3898fbb5954aa82dd2f60b37dd303449c81ba_0 | Q: Which dataset do they experiment with?
| Penn Treebank, Text8, WSJ10
3070d6d6a52aa070f0c0a7b4de8abddd3da4f056 | 3070d6d6a52aa070f0c0a7b4de8abddd3da4f056_0 | Q: How do they measure performance of language model tasks?
Text: Introduction
Linguistic theories generally regard natural language as consisting of two part: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0 . To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings. Current neural language models can provide meaningful word represent BIBREF1 , BIBREF2 , BIBREF3 . However, standard recurrent neural networks only implicitly model syntax, thus fail to efficiently use structure information BIBREF4 .
Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BIBREF5 , BIBREF4 , BIBREF6 . Integrating syntactic structure into a language model is important for different reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BIBREF7 , BIBREF8 , BIBREF9 ; 2) to capture complex linguistic phenomena, like long-term dependency problem BIBREF4 and the compositional effects BIBREF5 ; 3) to provide shortcut for gradient back-propagation BIBREF6 .
A syntactic parser is the most common source for structure information. Supervised parsers can achieve very high performance on well constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BIBREF5 , or how to generate the next word given previous words BIBREF10 . However, only major languages have treebank data for training parsers, and it request expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers.
Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistic BIBREF11 , BIBREF12 , BIBREF13 . Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BIBREF14 ; to create a dependency structure to better suit a particular NLP application BIBREF10 ; to empirically argue for or against the poverty of the stimulus BIBREF15 , BIBREF16 ; and to examine cognitive issues in language learning BIBREF17 .
In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:
We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts.
Related Work
The idea of introducing some structures, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BIBREF5 , BIBREF4 learn a bottom-up encoder, taking as an input a parse tree supplied from an external parser. There are models that are able to infer a tree during test time, while still need supervised signal on tree structure during training. For example, BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , etc. Moreover, BIBREF22 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but different from theirs, it is a recurrent network that implicitly models tree structure through attention.
Apart from the approach of using recursive networks to capture structures, there is another line of research which try to learn recurrent features at multiple scales, which can be dated back to 1990s (e.g. BIBREF23 , BIBREF24 , BIBREF25 ). The NARX RNN BIBREF25 is another example which used a feed forward net taking different inputs with predefined time delays to model long-term dependencies. More recently, BIBREF26 also used multiple layers of recurrent networks with different pre-defined updating frequencies. Instead, our model tries to learn the structure from data, rather than predefining it. In that respect, BIBREF6 relates to our model since it proposes a hierarchical multi-scale structure with binary gates controlling intra-layer connections, and the gating mechanism is learned from data too. The difference is that their gating mechanism controls the updates of higher layers directly, while ours control it softly through an attention mechanism.
In terms of language modeling, syntactic language modeling can be dated back to BIBREF27. BIBREF28, BIBREF29 have also proposed language models with a top-down parsing mechanism. Recently, BIBREF30, BIBREF31 have introduced neural networks into this space, learning both a discriminative and a generative model with top-down parsing, trained with a supervision signal from parsed sentences in the corpus. There are also dependency-based approaches using neural networks, including BIBREF32, BIBREF33, BIBREF34.
Parsers are also related to our work since they all infer grammatical tree structure given a sentence. For example, SPINN BIBREF35 is a shift-reduce parser that uses an LSTM as its composition function. The transition classifier in SPINN is trained with supervision on the output of the Stanford PCFG Parser BIBREF36. Unsupervised parsers are more closely aligned with what our model is doing. BIBREF12 presented a generative model for the unsupervised learning of dependency structures. BIBREF11 proposed a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. We compare our parsing quality with these two papers in Section SECREF43.
Motivation
Suppose we have a sequence of tokens INLINEFORM0 governed by the tree structure shown in Figure FIGREF4. The leaves INLINEFORM1 are observed tokens. Node INLINEFORM2 represents the meaning of the constituent formed by its leaves INLINEFORM3, where INLINEFORM4 and INLINEFORM5 stand for the leftmost and rightmost children. The root INLINEFORM6 represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings.
Directly modeling the tree structure is a challenging task, usually requiring supervision to learn BIBREF4. In addition, relying on tree structures can result in a model that is not sufficiently robust to ungrammatical sentences BIBREF37. In contrast, recurrent models provide a convenient way to model sequential data, with the current hidden state depending only on the last hidden state. This makes models more robust when facing nonconforming sequential data, but they neglect the real dependency relations that dominate the structure of natural language sentences.
In this paper, we use skip connections to integrate structured dependency relations into a recurrent neural network. In other words, the current hidden state does not only depend on the last hidden state, but also on previous hidden states that have a direct syntactic relation to the current one.
Figure FIGREF5 shows the structure of our model. The non-leaf node INLINEFORM0 is represented by a set of hidden states INLINEFORM1, where INLINEFORM2 is the leftmost descendant leaf and INLINEFORM3 is the rightmost one. Arrows show skip connections built by our model according to the latent structure. Skip connections are controlled by gates INLINEFORM4. In order to define INLINEFORM5, we introduce a latent variable INLINEFORM6 to represent the local structural context of INLINEFORM7:
and gates are defined as: DISPLAYFORM0
Given this architecture, the sibling dependency relation is modeled by at least one skip connection. The skip connection directly feeds information forward and passes gradients backward. The parent-to-child relation is modeled implicitly by the skip-connection relations between nodes.
The model recurrently updates the hidden states according to: DISPLAYFORM0
and the probability distribution for next word is approximated by: DISPLAYFORM0
where INLINEFORM0 are gates that control skip-connections. Both INLINEFORM1 and INLINEFORM2 have a structured attention mechanism that takes INLINEFORM3 as input and forces the model to focus on the most related information. Since INLINEFORM4 is an unobserved latent variable, we explain an approximation for INLINEFORM5 in the next section. The structured attention mechanism is explained in Section SECREF21.
Modeling Local Structure
In this section we give a probabilistic view on how to model the local structure of language. A detailed elaboration for this section is given in Appendix . At time step INLINEFORM0 , INLINEFORM1 represents the probability of choosing one out of INLINEFORM2 possible local structures. We propose to model the distribution by the Stick-Breaking Process: DISPLAYFORM0
The formula can be understood by noting that after the time steps INLINEFORM0 have their probabilities assigned, INLINEFORM1 is the remaining probability and INLINEFORM2 is the portion of the remaining probability that we assign to time step INLINEFORM3. The variable INLINEFORM4 is parametrized in the next section.
As shown in Appendix , the expectation of gate value INLINEFORM0 is the Cumulative Distribution Function (CDF) of INLINEFORM1 . Thus, we can replace the discrete gate value by its expectation: DISPLAYFORM0
With these relaxations, Eq. EQREF9 and EQREF10 can be approximated by using a soft gating vector to update the hidden state and predict the next token.
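For concreteness, this relaxation can be sketched in a few lines. The snippet below is a minimal illustration (not the authors' code): it assumes the values alpha are already available, ordered from the most recent previous time step backwards, and shows how the stick-breaking probabilities and the expected (soft) gate values would be computed.

```python
import torch

def stick_breaking_gates(alpha):
    # alpha: 1-D tensor; alpha[j] is the portion of the remaining probability
    # assigned to the j-th candidate position, ordered from the most recent
    # previous time step backwards (ordering convention assumed here).
    ones = torch.ones(1, dtype=alpha.dtype)
    remaining = torch.cat([ones, torch.cumprod(1.0 - alpha, dim=0)[:-1]])
    p = alpha * remaining            # stick-breaking probabilities over local structures
    g = torch.cumsum(p, dim=0)       # expected gate values: the CDF of p
    return p, g
```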
Parsing Network
In Eq. EQREF12, INLINEFORM0 is the portion of the remaining probability that we assign to position INLINEFORM1. The stick-breaking process should assign high probability to INLINEFORM2, the closest constituent-beginning word, so the model should assign a large INLINEFORM3 to words beginning new constituents. If INLINEFORM4 itself is a constituent-beginning word, the model should assign a large INLINEFORM5 to words beginning larger constituents. In other words, the model will consider longer dependency relations for the first word in a constituent. Given the sentence in Figure FIGREF4, at time step INLINEFORM6, both INLINEFORM7 and INLINEFORM8 should be close to 1, and all other INLINEFORM9 should be close to 0.
In order to parametrize INLINEFORM0, our basic hypothesis is that words in the same constituent should have a closer syntactic relation to each other, and that this syntactic proximity can be represented by a scalar value. From the tree-structure point of view, the shortest path between leaves in the same subtree is shorter than that between leaves in different subtrees.
To model syntactic proximity, we introduce a new feature, Syntactic Distance. For a sentence of length INLINEFORM0, we define a set of INLINEFORM1 real-valued scalar variables INLINEFORM2, with INLINEFORM3 representing a measure of the syntactic relation between the pair of adjacent words INLINEFORM4. INLINEFORM5 could be the last word in the previous sentence or a padding token. For time step INLINEFORM6, we want to find the closest words INLINEFORM7 that have a larger syntactic distance than INLINEFORM8. Thus INLINEFORM9 can be defined as: DISPLAYFORM0
where INLINEFORM0 . INLINEFORM1 is the temperature parameter that controls the sensitivity of INLINEFORM2 to the differences between distances.
The Syntactic Distance has some nice properties that allow us to infer a tree structure from it and make it robust to the intermediate non-valid tree structures that the model may encounter during learning. In the Appendix we list these properties and further explain the meanings of their values.
BIBREF38 shows that it is possible to identify the beginning and ending words of a constituent using local information. In our model, the syntactic distance between a given token (usually represented as a word embedding INLINEFORM0) and its previous token INLINEFORM1 is provided by a convolutional kernel over a set of consecutive previous tokens INLINEFORM2. This convolution is depicted as the gray triangles in Figure FIGREF20. Each triangle here represents two layers of convolution. Formally, the syntactic distance INLINEFORM3 between token INLINEFORM4 and INLINEFORM5 is computed by DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 , INLINEFORM1 are the kernel parameters. INLINEFORM2 and INLINEFORM3 can be seen as another convolutional kernel with window size 1, convolved over INLINEFORM4 's. Here the kernel window size INLINEFORM5 determines how far back into the history node INLINEFORM6 can reach while computing its syntactic distance INLINEFORM7 . Thus we call it the look-back range.
Convolving INLINEFORM0 and INLINEFORM1 on the whole sequence with length INLINEFORM2 yields a set of distances. For the tokens in the beginning of the sequence, we simply pad INLINEFORM3 zero vectors to the front of the sequence in order to get INLINEFORM4 outputs.
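As a rough sketch of this component, the Parsing Network can be written as a small stack of 1-D convolutions over the embeddings of the last few tokens, followed by a window-1 convolution that maps each position to a scalar syntactic distance. The depth, kernel sizes and nonlinearity below are assumptions made for illustration, not the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParsingNetwork(nn.Module):
    def __init__(self, emb_dim, hidden_dim, look_back):
        super().__init__()
        self.look_back = look_back
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size=look_back)  # the "gray triangle"
        self.to_distance = nn.Conv1d(hidden_dim, 1, kernel_size=1)         # window-1 kernel

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, emb_dim)
        x = embeddings.transpose(1, 2)
        x = F.pad(x, (self.look_back - 1, 0))      # zero-pad the front to obtain seq_len outputs
        h = torch.relu(self.conv(x))
        return self.to_distance(h).squeeze(1)      # (batch, seq_len) syntactic distances d_t
```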
Reading Network
The Reading Network generates new states INLINEFORM0 based on the input INLINEFORM1, previous memory states INLINEFORM2, and gates INLINEFORM3, as shown in Eq. EQREF9.
Similar to the Long Short-Term Memory-Network (LSTMN) BIBREF39, the Reading Network maintains the memory states via two sets of vectors: a hidden tape INLINEFORM0 and a memory tape INLINEFORM1, where INLINEFORM2 is the upper bound for the memory span. The hidden state INLINEFORM3 is now represented by a tuple of two vectors INLINEFORM4. The Reading Network captures the dependency relations through a modified attention mechanism: structured attention. At each step of the recurrence, the model summarizes the previous recurrent states via the structured attention mechanism, then performs a normal LSTM update, with hidden and cell states output by the attention mechanism.
At each time step INLINEFORM0 , the read operation attentively links the current token to previous memories with a structured attention layer: DISPLAYFORM0
where INLINEFORM0 is the dimension of the hidden state. Modulated by the gates in Eq. EQREF13, the structured intra-attention weight is defined as: DISPLAYFORM0
This yields a probability distribution over the hidden state vectors of previous tokens. We can then compute adaptive summary vectors for the previous hidden and memory tapes, denoted by INLINEFORM0 and INLINEFORM1: DISPLAYFORM0
Structured attention provides a way to model the dependency relations shown in Figure FIGREF4 .
The Reading Network takes INLINEFORM0, INLINEFORM1 and INLINEFORM2 as input and computes the values of INLINEFORM3 and INLINEFORM4 through the LSTM recurrent update BIBREF40. The write operation then concatenates INLINEFORM5 and INLINEFORM6 to the end of the hidden and memory tapes.
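A compact sketch of one read step is given below. The exact form of the attention score and of its modulation by the gates is not recoverable from the elided equations, so the dot-product score scaled by the hidden dimension, the gate-weighted renormalisation, and the use of the query vector as the LSTM input are all assumptions that merely follow the description above.

```python
import torch

def read_step(h_tape, c_tape, query, gates, cell):
    # h_tape, c_tape: (L, dim) hidden and memory tapes of the previous L steps
    # query: (dim,) vector derived from the current input; gates: (L,) soft gates
    # cell: a torch.nn.LSTMCell(dim, dim) performing the recurrent update
    dim = h_tape.size(1)
    scores = h_tape @ query / dim                      # intra-attention scores
    weights = gates * torch.softmax(scores, dim=0)     # modulate attention by structure gates
    weights = weights / (weights.sum() + 1e-8)
    h_summary = weights @ h_tape                       # adaptive summary of the hidden tape
    c_summary = weights @ c_tape                       # adaptive summary of the memory tape
    h_new, c_new = cell(query.unsqueeze(0),
                        (h_summary.unsqueeze(0), c_summary.unsqueeze(0)))
    return h_new.squeeze(0), c_new.squeeze(0)          # appended to the tapes by the write op
```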
Predict Network
The Predict Network models the probability distribution of the next word INLINEFORM0, conditioned on hidden states INLINEFORM1 and gates INLINEFORM2. Note that at time step INLINEFORM3 the model cannot observe INLINEFORM4; a temporary estimate of INLINEFORM5 is therefore computed from INLINEFORM6: DISPLAYFORM0
From there we compute its corresponding INLINEFORM0 and INLINEFORM1 for Eq. EQREF10 . We parametrize INLINEFORM2 function as: DISPLAYFORM0
where INLINEFORM0 is an adaptive summary of INLINEFORM1, output by structured attention controlled by INLINEFORM2. INLINEFORM3 could be a simple feed-forward MLP, or a more complex architecture, such as a ResNet, to add more depth to the model.
Experiments
We evaluate the proposed model on three tasks: character-level language modeling, word-level language modeling, and unsupervised constituency parsing.
Character-level Language Model
From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leaves. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model on preprocessed versions of the Penn Treebank (PTB) and Text8 datasets.
When training, we use truncated back-propagation and feed the final memory position from the previous batch as the initial memory of the next one. At the beginning of training and test time, the model's initial hidden states are filled with zeros. Optimization is performed with Adam using learning rate INLINEFORM0, weight decay INLINEFORM1, INLINEFORM2, INLINEFORM3 and INLINEFORM4. We carry out gradient clipping with maximum norm 1.0. The learning rate is multiplied by 0.1 whenever validation performance does not improve for 2 checkpoints, which are performed at the end of each epoch. We also apply layer normalization BIBREF41 to the Reading Network, and batch normalization to the Predict Network and Parsing Network. For all of the character-level language modeling experiments, we apply the same procedure, varying only the number of hidden units, mini-batch size and dropout rate.
We process the Penn Treebank dataset BIBREF42 following the procedure introduced in BIBREF43. For character-level PTB, the Reading Network has two recurrent layers and the Predict Network has one residual block. The hidden state size is 1024 units. The input and output embedding sizes are 128 and are not shared. Look-back range INLINEFORM0, temperature parameter INLINEFORM1, upper bound of memory span INLINEFORM2. We use a batch size of 64 and truncated back-propagation with 100 timesteps. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0, 0.25, 0.1), respectively.
In Figure FIGREF32, we visualize the syntactic distance estimated by the Parsing Network while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate words. In other words, if the model sees a space, it will attend over all previous steps; if the model sees a letter, it will attend no further than the last space step. The model autonomously discovered how to avoid inter-word attention connections, and uses the hidden states of space (separator) tokens to summarize previous information. This is strong evidence that the model can understand the latent structure of the data. As a result, our model achieves state-of-the-art performance and significantly outperforms baseline models. It is worth noting that HM-LSTM BIBREF6 also induces a similar structure from data in an unsupervised fashion, but the discrete operations in HM-LSTM make their training procedure more complicated than ours.
Word-level Language Model
Compared to character-level language modeling, word-level language modeling needs to deal with complex syntactic structure and varied linguistic phenomena, but it has fewer long-term dependencies. We evaluate the word-level variant of our language model on preprocessed versions of the Penn Treebank (PTB) BIBREF42 and Text8 BIBREF49 datasets.
We apply the same procedure and hyper-parameters as in the character-level language model, except that optimization is performed with Adam with INLINEFORM0, which turns off the exponential moving average of the estimates of the means of the gradients BIBREF50. We also adapt the number of hidden units, the mini-batch size and the dropout rate to the different tasks.
We process the Penn Treebank dataset BIBREF43 following the procedure introduced in BIBREF51. For word-level PTB, the Reading Network has two recurrent layers and the Predict Network does not have a residual block. The hidden state size is 1200 units, and the input and output embedding sizes are 800 and shared BIBREF52, BIBREF53. Look-back range INLINEFORM0, temperature parameter INLINEFORM1 and upper bound of memory span INLINEFORM2. We use a batch size of 64 and truncated back-propagation with 35 time-steps. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.7, 0.5, 0.5), respectively.
The Text8 dataset contains 17M training tokens and has a vocabulary size of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance. As this dataset contains various articles from Wikipedia, longer-term information (such as the current topic) plays a bigger role than in the PTB experiments BIBREF61. We apply the same procedure and hyper-parameters as in character-level PTB, except that we use a batch size of 128. The dropout values used on input/output embeddings, between recurrent layers, and on recurrent states were (0.4, 0.2, 0.2), respectively.
In Table TABREF39, our results are comparable to the state-of-the-art methods. Since we do not have the computational resources used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyper-parameter tuning process. As shown in Table TABREF42, our method outperforms the baseline methods. It is worth noting that the continuous cache pointer can also be applied to the output of our Predict Network without modification. Visualizations of tree structures generated from the learned PTB language model are included in Appendix . In Table TABREF40, we show the test perplexity for different variants of PRPN, each of which removes part of the model. By removing the Parsing Network, we observe a significant drop in performance. This stands as empirical evidence for the benefit of having structure information to control attention.
Unsupervised Constituency Parsing
The unsupervised constituency parsing task compares the tree structure inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset. WSJ10 consists of the 7422 sentences in the Penn Treebank Wall Street Journal section which contain 10 words or fewer after the removal of punctuation and null elements. Evaluation was done by checking whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 (INLINEFORM0) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14, our model generates a binary tree. Although standard constituency parse trees are not limited to binary trees, previous unsupervised constituency parsing models also generate binary trees BIBREF11, BIBREF13. Our model is compared with several baseline methods, which are explained in Appendix .
Different from the previous experimental setting, the model treats each sentence independently during training and test time. When training, we feed one batch of sentences at each iteration; within a batch, shorter sentences are padded with 0. At the beginning of the iteration, the model's initial hidden states are filled with zeros. When testing, we feed sentences one by one to the model, then use the gate values output by the model to recursively combine tokens into constituents, as described in Appendix .
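The precise recursive procedure is given in the paper's appendix and is not reproduced here; the sketch below shows one common way to turn the learned syntactic distances into an unlabelled binary tree, by recursively splitting each span at its largest internal distance, which is the behaviour the gates encode.

```python
def distances_to_tree(tokens, distances):
    # tokens: list of words; distances[i] is the syntactic distance between tokens[i-1]
    # and tokens[i] (distances[0] refers to the boundary with the previous sentence
    # and is never used as a split point).
    if len(tokens) == 1:
        return tokens[0]
    split = max(range(1, len(tokens)), key=lambda i: distances[i])
    return (distances_to_tree(tokens[:split], distances[:split]),
            distances_to_tree(tokens[split:], distances[split:]))
```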
Table TABREF44 summarizes the results. Our model significantly outperforms the RANDOM baseline, indicating a high consistency with human annotation. Our model also shows comparable performance with the CCM model. In fact, our Parsing Network and CCM both focus on the relation between successive tokens: as described in Section SECREF14, our model computes the syntactic distance between all successive pairs of tokens and our parsing algorithm then recursively assembles tokens into constituents according to the learned distances, while CCM recursively models the probability that a contiguous subsequence of a sentence is a constituent. From this, one can also understand why our model is outperformed by the DMV+CCM and UML-DOP models: the DMV+CCM model has extra information from a dependency parser, and the UML-DOP approach captures both contiguous and non-contiguous lexical dependencies BIBREF13.
Conclusion
In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. We introduce a new neural parsing network, the Parsing-Reading-Predict Network, that can make differentiable parsing decisions. We use a new structured attention mechanism to control skip connections in a recurrent neural network, so that the induced syntactic structure information can be used to improve the model's performance. Via this mechanism, the gradient can be directly back-propagated from the language model loss function into the neural Parsing Network. The proposed model achieves (or is close to) the state-of-the-art on both word- and character-level language modeling tasks. Experiments also show that the inferred syntactic structure is highly correlated with human expert annotations.
Acknowledgement
The authors would like to thank Timothy J. O'Donnell and Chris Dyer for the helpful discussions. | BPC, Perplexity |
ee9b95d773e060dced08705db8d79a0a6ef353da | ee9b95d773e060dced08705db8d79a0a6ef353da_0 | Q: How are content clusters used to improve the prediction of incident severity?
Text: Introduction
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics present a unique opportunity to improve the quality and safety of medical care for patient benefit BIBREF1. Much recent research in this area has been on personalised medicine, with the aim to deliver improved diagnostic and treatment through the synergistic integration of datasets at the level of the individual. A different source of healthcare data pertains to organisational matters. In the United Kingdom, the National Health Service (NHS) has a long history of documenting the different aspects of healthcare provision, and is currently in the process of making available properly anonymised datasets, with the aim of leveraging advanced analytics to improve NHS services.
One such database is the National Reporting and Learning System (NRLS), a repository of patient safety incident reports from the NHS in England and Wales set up in 2003, which now contains over 13 million records. The incidents are reported under standardised categories and contain both organisational and spatio-temporal information (structured data) and a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission or discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into complex processes in healthcare with a view towards service improvement.
Although statistical analyses are routinely performed on the structured data (dates, locations, hand-coded categories, etc), free text is typically read manually and often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. These limitations are due to a lack of methodologies that can provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Automatic categorisation of incidents from free text would sidestep human error and difficulties in assigning incidents to a priori pre-defined lists in the reporting system. Such tools can also offer unbiased insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
In this work, we showcase an algorithmic methodology that detects content-based groups of records in an unsupervised manner, based only on the free (unstructured) textual descriptions of the incidents. To do so, we combine deep neural-network high-dimensional text-embedding algorithms with graph-theoretical methods for multiscale clustering. Specifically, we apply the framework of Markov Stability (MS), a multiscale community detection algorithm, to sparsified graphs of documents obtained from text vector similarities. Our method departs both from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF2, and from more recent approaches that have used deep neural network based language models, but have used k-means clustering without a graph-based analysis BIBREF3. Previous applications of network theory to text analysis have included the work of Lanchichinetti and co-workers BIBREF4, who proposed a probabilistic graph construction analysed with the InfoMap algorithm BIBREF5; however, their community detection was carried out at a single-scale and the BoW representation of text lacks the power of text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than from pre-designed classifications. The obtained results can help mitigate human error or effort in finding the right category in complex classification trees. We illustrate in our analysis the insight gained from this unsupervised, multi-resolution approach in this specialised corpus of medical records.
As an additional application, we use machine learning methods for the prediction of the degree of harm of incidents directly from the text in the NRLS incident reports. Although the degree of harm is recorded by the reporting person for every event, this information can be unreliable as reporters have been known to game the system, or to give different answers depending on their professional status BIBREF6. Previous work on predicting the severity of adverse events BIBREF7, BIBREF8 used reports submitted to the Advanced Incident Management System by Australian public hospitals, and used BoW and Support Vector Machines (SVMs) to detect extreme-risk events. Here we demonstrate that publicly reported measures derived from NHS Staff Surveys can help select ground truth labels that allow supervised training of machine learning classifiers to predict the degree of harm directly from text embeddings. Further, we show that the unsupervised clusters of content derived with our method improve the classification results significantly.
An a posteriori manual labelling by three clinicians agrees with our predictions based purely on text almost as much as with the original hand-coded labels. These results indicate that incidents can be automatically classified according to their degree of harm based only on their textual descriptions, and underline the potential of automatic document analysis to help reduce human workload.
Introduction ::: Data description
The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among many other details. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable.
The records are manually classified by operators according to a two-level system of incident types. The top level contains 15 categories including general classes such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure', alongside more specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'.
Each record is also labelled based on the degree of harm to the patients as one of: `No Harm', `Low Harm', `Moderate Harm', `Severe Harm' or `Death'. These degrees are precisely defined by the WHO BIBREF9 and the NHS BIBREF10.
Graph-based framework for text analysis and clustering
Our framework combines text-embedding, geometric graph construction and multi-resolution community detection to identify, rather than impose, content-based clusters from free, unstructured text in an unsupervised manner.
Figure FIGREF2 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, with words in their most normalised forms and some words removed if they have no distinctive meaning when used out of context BIBREF11, BIBREF12. We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF13 on the full set (13 million) of pre-processed text records. (Training a vector model on smaller sets of 1 million records also produces good results as seen in Table TABREF5). This training step of the text model is only done once.
The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters.
The partitions found by MS across levels of resolution are analysed a posteriori through visualisations and quantitative scores. The visualisations include: (i) word clouds to summarise the main content; (ii) graph layouts; and (iii) Sankey diagrams and contingency tables that capture correspondences between partitions. The quantitative scores include: (i) the intrinsic topic coherence (measured by the pairwise mutual information BIBREF19, BIBREF20); and (ii) the similarity to hand-coded categories (measured by the normalised mutual information BIBREF21).
Our framework also covers prediction of the degree of harm (DoH) caused to the patient using text embeddings and the unsupervised cluster assignments obtained from our multiscale graph partitioning. To perform this task, we use the hand-coded DoH from the NRLS to train three commonly used classifiers BIBREF22, BIBREF23 (Ridge, Support Vector Machine with a linear kernel, Random Forest) to predict the DoH using TF-iDF and Doc2Vec embeddings of the text and our MS cluster assignments. The classifiers are then evaluated in predicting the DoH using cross-validation.
We now explain the steps of the methodological pipeline in more detail.
Graph-based framework for text analysis and clustering ::: Text Preprocessing
Text preprocessing is important to enhance the performance of text embedding techniques. We applied standard preprocessing to the raw text of all 13 million records in our corpus, as follows. We divide our documents into individual word tokens using the NLTK library BIBREF11 and remove punctuation and digit-only tokens. We then apply word stemming using the Porter algorithm BIBREF12, BIBREF24; if the Porter method cannot find a stemmed version of a token, we apply the Snowball algorithm BIBREF25. Finally, we remove any stop-words (frequent words with low content) using NLTK's stop-word list. Although pre-processing reduces some of the syntactic information, it consolidates the semantic information of the vocabulary. We note that the incident descriptions contain typos and acronyms, which have been left uncorrected to avoid manual intervention or the use of spell checkers, so as to mimic as closely as possible a realistic scenario.
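A minimal sketch of this preprocessing chain with NLTK is shown below. The fallback test for when Porter "cannot find a stemmed version" is an assumption (here, the token being returned unchanged), and the required NLTK corpora must be downloaded beforehand.

```python
import string
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, SnowballStemmer

porter, snowball = PorterStemmer(), SnowballStemmer("english")
stop_words = set(stopwords.words("english"))

def preprocess(text):
    tokens = word_tokenize(text.lower())
    tokens = [t for t in tokens if t not in string.punctuation and not t.isdigit()]
    stemmed = []
    for t in tokens:
        s = porter.stem(t)
        stemmed.append(s if s != t else snowball.stem(t))  # fall back to Snowball if Porter leaves the token unchanged
    return [t for t in stemmed if t not in stop_words]
```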
Graph-based framework for text analysis and clustering ::: Text Vector Embedding
Computational text analysis relies on a mathematical representation of the base units of text (character $n$-grams, words or documents). Since our methodology is unsupervised, we avoid the use of labelled data, in contrast to supervised or semi-supervised classification methods BIBREF26, BIBREF27. In our work, we use a representation of text documents as vectors following recent developments in the field.
Traditionally, bag-of-words (BoW) methods represented documents as vectors of word frequencies weighted by inverse document frequency (TF-iDF). Such methods provide a statistical description of documents but they do not carry information about the order or proximity of words to each other and hence disregard semantic or syntactic relationships between words. In addition, BoW representations carry little information content as they tend to be high-dimensional and very sparse, due to the large size of word dictionaries and low frequencies of many terms.
Recently, deep neural network language models have successfully overcome the limitations of BoW methods by incorporating neighbourhoods in the mathematical description of each term. Distributed Bag of Words (DBOW), better known as Doc2Vec BIBREF13, is a form of Paragraph Vectors (PV) which creates a model that represents any word sequence (i.e. sentences, paragraphs, documents) as $d$-dimensional vectors, where $d$ is user-defined (typically $d=300$). Training a Doc2Vec model starts with a random $d$-dimensional vector assignment for each document in the corpus. A stochastic gradient descent algorithm iterates over the corpus with the objective of predicting a randomly sampled set of words from each document by using only the document's $d$-dimensional vector BIBREF13. The objective function being optimised by PV-DBOW is similar to the skip-gram model in Refs. BIBREF28, BIBREF29. Doc2Vec has been shown BIBREF30 to capture both semantic and syntactic characterisations of the input text, and outperforms BoW-based models such as LDA BIBREF2.
Benchmarking the Doc2Vec training: Here, we use the Gensim Python library BIBREF31 to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters (chosen based on our own numerical experiments and the general guidelines provided by BIBREF32) in order to optimise the output. To characterise the usability and quality of the models, we trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters. In particular, we checked the effect of corpus size by training Doc2Vec models on the full 13 million NRLS records and on randomly sampled subsets of 1 million and 2 million records.
Since our target analysis has heavy medical content and specific use of words, we also tested the importance of the training corpus by generating an additional Doc2Vec model using a set of 5 million articles from the English Wikipedia representing standard, generic English usage, which works well in the analysis of news articles BIBREF33.
The results in Table TABREF5 show that training on the highly specific text from the NRLS records is an important ingredient in the successful vectorisation of the documents, as shown by the degraded performance for the Wikipedia model across a variety of training hyper-parameters. On the other hand, reducing the size of the corpus from 13 million to 1 million records did not affect the benchmarking dramatically. This robustness of the results to the size of the training corpus was confirmed further with the use of more detailed metrics, as discussed below in Section SECREF27 (see e.g., Figure FIGREF29).
Based on our benchmarking, henceforth we use the Doc2Vec model trained on the 13+ million NRLS records with the following hyper-parameters: {training method = dbow, number of dimensions for feature vectors size = 300, number of epochs = 10, window size = 15, minimum count = 5, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001 }. As an indication of computational cost, the training of this model takes approximately 11 hours (run in parallel with 7 threads) on shared servers.
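These hyper-parameters translate directly into a Gensim call; the snippet below is a sketch of training and inference under that configuration (the `records` iterable of pre-processed token lists and the example query tokens are placeholders, not data from the corpus).

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# `records` is assumed to be an iterable of pre-processed token lists, one per incident report.
corpus = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(records)]

model = Doc2Vec(corpus,
                dm=0,            # PV-DBOW training method
                vector_size=300,
                window=15,
                min_count=5,
                negative=5,
                sample=0.001,
                epochs=10,
                workers=7)

# Inference of a 300-dimensional vector for a (possibly unseen) pre-processed document:
vector = model.infer_vector(["patient", "fell", "transfer", "bed", "chair"])
```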
Graph-based framework for text analysis and clustering ::: Similarity graph of documents from text similarities
Once the Doc2Vec model is trained, we use it to infer a vector for each record in our analysis subset and construct $\hat{S}$, a similarity matrix between the vectors by: computing the matrix of cosine similarities between all pairs of records, $S_\text{cos}$; transforming it into a distance matrix $D_{cos} = 1-S_{cos}$; applying element-wise max norm to obtain $\hat{D}=\Vert D_{cos}\Vert _{max}$; and normalising the similarity matrix $\hat{S} = 1-\hat{D}$ which has elements in the interval $[0,1]$.
This similarity matrix can be thought of as the adjacency matrix of a fully connected weighted graph. However, such a graph contains many edges with small weights reflecting the fact that in high-dimensional noisy data even the least similar nodes present a substantial degree of similarity. Indeed, such weak similarities are in most cases redundant and can be explained through stronger pairwise similarities. These weak, redundant edges obscure the graph structure, as shown by the diffuse visualisation in Figure FIGREF7A.
To reveal the graph structure, we sparsify the similarity matrix to obtain an MST-kNN graph BIBREF14, based on a geometric heuristic that preserves the global connectivity of the graph while retaining details about the local geometry of the dataset. The MST-kNN algorithm starts by computing the minimum spanning tree (MST) of the full matrix $\hat{D}$, i.e., the tree with $(N-1)$ edges connecting all nodes in the graph with minimal sum of edge weights (distances). The MST is computed using the Kruskal algorithm implemented in SciPy BIBREF34. To this MST, we add edges connecting each node to its $k$ nearest nodes (kNN) if they are not already in the MST. Here $k$ is a user-defined parameter that regulates the sparsity of the resulting graph. The binary adjacency matrix of the MST-kNN graph is Hadamard-multiplied with $\hat{S}$ to give the adjacency matrix $A$ of the weighted, undirected sparsified graph.
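A sketch of the similarity-matrix normalisation and the MST-kNN sparsification, using SciPy and scikit-learn, is given below. This is a minimal version that ignores ties and numerical edge cases; function and variable names are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import kneighbors_graph

def mst_knn_graph(doc_vectors, k=13):
    S_cos = cosine_similarity(doc_vectors)
    D = 1.0 - S_cos
    D_hat = D / D.max()                  # element-wise max norm
    S_hat = 1.0 - D_hat                  # normalised similarities in [0, 1]

    mst = minimum_spanning_tree(D_hat)   # (N-1) edges preserving global connectivity
    knn = kneighbors_graph(D_hat, n_neighbors=k, metric="precomputed")
    keep = ((mst + mst.T + knn + knn.T) > 0).toarray()  # union of MST and kNN edges, symmetrised
    A = keep * S_hat                     # Hadamard product: weight retained edges by similarity
    np.fill_diagonal(A, 0.0)
    return A
```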
The network visualisations in Figure FIGREF7 give an intuitive picture of the effect of sparsification as $k$ is decreased. If $k$ is very small, the graph is very sparse but not robust to noise. As $k$ is increased, the local similarities between documents induce the formation of dense subgraphs (which appear closer in the graph visualisation layout). When the number of neighbours becomes too large, the local structure becomes diffuse and the subgraphs lose coherence, signalling the degradation of the local graph structure. Relatively sparse graphs that preserve important edges and global connectivity of the dataset (guaranteed here by the MST) have computational advantages when using community detection algorithms.
Although we use here the MST-kNN construction due to its simplicity and robustness, network inference, graph sparsification and graph construction from data is an active area of research, and several alternatives exist based on different heuristics, e.g., Graphical Lasso BIBREF35, Planar Maximally Filtered Graph BIBREF36, spectral sparsification BIBREF37, or the Relaxed Minimum Spanning Tree (RMST) BIBREF38. We have experimented with some of those methods and obtained comparable results. A detailed comparison of sparsification methods as well as the choice of distance in defining the similarity matrix $\hat{S}$ is left for future work.
Graph-based framework for text analysis and clustering ::: Multiscale Graph Partitioning
Community detection encompasses various graph partitioning approaches which aim to find `good' partitions into subgraphs (or communities) according to different cost functions, without imposing the number of communities a priori BIBREF39. The notion of community depends on the choice of cost function. Commonly, communities are taken to be subgraphs whose nodes are connected strongly within the community with relatively weak inter-community edges. Such structural notion is related to balanced cuts. Other cost functions are posed in terms of transitions inside and outside of the communities, usually as one-step processes BIBREF5. When transition paths of all lengths are considered, the concept of community becomes intrinsically multi-scale, i.e., different partitions are relevant at different time scales leading to a multi-level description dictated by the transition dynamics BIBREF15, BIBREF40, BIBREF16. This leads to the framework of Markov Stability (MS), a dynamics-based, multi-scale community detection methodology, which recovers several well-known heuristics as particular cases BIBREF15, BIBREF17, BIBREF18.
MS is an unsupervised community detection method that finds robust and stable partitions of a graph (and the associated communities) under the evolution of a continuous-time diffusion process without a priori choice of the number or type of communities or their relative relationships BIBREF15, BIBREF40, BIBREF16, BIBREF41 . In simple terms, MS can be understood by analogy to a drop of ink diffusing on the graph: the ink diffuses homogeneously unless the graph has intrinsic sub-structures, in which case the ink gets transiently contained, over particular time scales, within groups of nodes. The existence of such transients indicates a natural scale to partition the graph along the subgraphs (or communities) where the diffusion is transiently trapped. As the process continues to evolve, the ink diffuses out of those communities but might get transiently contained in other, larger subgraphs, if such multi-level structure exists. By analysing the Markov dynamics over time, MS detects the structure of the graph across scales. If a graph has no natural scales for partitioning, then MS returns no communities. The Markov time $t$ thus acts as a resolution parameter that allows us to extract robust partitions that persist over particular time scales, in an unsupervised manner.
Mathematically, given the adjacency matrix $A_{N \times N}$ of the graph obtained as described previously, let us define the diagonal matrix $D=\text{diag}(\mathbf {d})$, where $\mathbf {d}=A \mathbf {1}$ is the degree vector. The random walk Laplacian matrix is defined as $L_\text{RW}=I_N-D^{-1}A$, where $I_N$ is the identity matrix of size $N$ and the transition matrix (or kernel) of the associated continuous-time Markov process is $P(t)=e^{-t L_\text{RW}}, \, t>0$ BIBREF16. Any partition $\mathcal {H}$ into $C$ clusters is associated with a binary membership matrix $H_{N \times C}$ that maps the $N$ nodes into the clusters. Below, we will use the matrix $H$ to denote the corresponding partition $\mathcal {H}$. We can then compute the $C\times C$ clustered autocovariance matrix:
where $\pi $ is the steady-state distribution of the process and $\Pi =\text{diag}(\pi )$. The element $[R(t,H)]_{\alpha \beta }$ quantifies the probability that a random walker starting from community $\alpha $ at $t=0$ will be in community $\beta $ at time $t$, minus the probability that this event occurs by chance at stationarity.
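The display equation for the clustered autocovariance did not survive extraction. Given the definitions of $P(t)$, $\pi $, $\Pi $ and $H$ above, it is presumably the standard Markov Stability form

$R(t,H) = H^{\top } \left( \Pi P(t) - \pi \pi ^{\top } \right) H,$

which matches the interpretation of $[R(t,H)]_{\alpha \beta }$ given above.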
The above definitions allow us to introduce our cost function measuring the goodness of a partition over time $t$, termed the Markov Stability of partition $H$:
A partition $H$ that maximises $r(t,H)$ is comprised of communities that preserve the flow within themselves over time $t$, since in that case the diagonal elements of $R(t,H)$ will be large and the off-diagonal elements will be small. For details, see BIBREF15, BIBREF40, BIBREF16, BIBREF42.
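Similarly, the elided definition of the cost function is presumably the trace of the clustered autocovariance,

$r(t,H) = \operatorname{trace} \, R(t,H) = \sum _{\alpha =1}^{C} [R(t,H)]_{\alpha \alpha },$

which is consistent with the statement that maximising it favours large diagonal and small off-diagonal elements of $R(t,H)$.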
Our computational algorithm thus searches for partitions at each Markov time $t$ that maximise $r(t,H)$. Although the maximisation of (DISPLAY_FORM11) is an NP-hard problem (hence with no guarantees for global optimality), there are efficient optimisation methods that work well in practice. Our implementation here uses the Louvain Algorithm BIBREF43, BIBREF18 which is efficient and known to give good results when applied to benchmarks. To obtain robust partitions, we run the Louvain algorithm 500 times with different initialisations at each Markov time and pick the best 50 with the highest Markov Stability value $r(t,H)$. We then compute the variation of information BIBREF44 of this ensemble of solutions $VI(t)$, as a measure of the reproducibility of the result under the optimisation. In addition, we search for partitions that are persistent across time $t$, as given by low values of the variation of information between optimised partitions across time $VI(t,t^{\prime })$. Robust partitions are therefore indicated by Markov times where $VI(t)$ shows a dip and $VI(t,t^{\prime })$ has an extended plateau with low values, indicating consistency under the optimisation and validity over extended scales BIBREF42, BIBREF16. Below, we apply MS to find partitions across scales of the similarity graph of documents, $A$. The communities detected correspond to groups of documents with similar content at different levels of granularity.
Graph-based framework for text analysis and clustering ::: Visualisation and interpretation of the results
Graph layouts: We use the ForceAtlas2 BIBREF45 layout algorithm to represent graphs on the plane. This layout assigns a harmonic spring to each edge and, through iterative rearrangements, finds an arrangement on the plane that balances attractive and repulsive forces between nodes. Hence similar nodes tend to appear close together in this layout. We colour the nodes by either hand-coded categories (Figure FIGREF7) or multiscale MS communities (Figure FIGREF21). Spatially coherent colourings in this layout imply good clusters in terms of the similarity graph.
Tracking membership through Sankey diagrams: Sankey diagrams allow us to visualise the relationship of node membership across different partitions and with respect to the hand-coded categories. Two-layer Sankey diagrams (e.g., Fig. FIGREF22) reflect the correspondence between MS clusters and the hand-coded external categories, whereas we use a multilayer Sankey diagram in Fig. FIGREF21 to present the multi-resolution MS community detection across scales.
Normalised contingency tables: To capture the relationship between our MS clusters and the hand-coded categories, we also provide a complementary visualisation as z-score heatmaps of normalised contingency tables, e.g., Fig. FIGREF22. This allows us to compare the relative association of content clusters to the external categories at different resolution levels. A quantification of the overall correspondence is also provided by the $NMI$ score in Eq. (DISPLAY_FORM17).
Word clouds of increased intelligibility through lemmatisation: Our method clusters text documents according to their intrinsic content. This can be understood as a type of topic detection. To visualise the content of clusters, we use Word Clouds as basic, yet intuitive, summaries of information to extract insights and compare a posteriori with hand-coded categories. They can also provide an aid for monitoring results when used by practitioners.
The stemming methods described in Section SECREF3 truncate words severely to enhance the power of the language processing computational methods by reducing the redundancy in the word corpus. Yet when presenting the results back to a human observer, it is desirable to report the cluster content with words that are readily comprehensible. To generate comprehensible word clouds in our a posteriori analyses, we use a text processing method similar to the one described in BIBREF46. Specifically, we use the part of speech (POS) tagging module from NLTK to leave out sentence parts except the adjectives, nouns, and verbs. We also remove less meaningful common verbs such as `be', `have', and `do' and their variations. The remaining words are then lemmatised in order to normalise variations of the same word. Finally, we use the Python library wordcloud to create word clouds with 2 or 3-gram frequency list of common word groups.
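A sketch of this a posteriori word-cloud generation (POS filtering, removal of common verbs, lemmatisation) is given below. The exact Penn tags kept and the n-gram handling inside the wordcloud library are simplifications of the description above.

```python
from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud

KEPT_TAGS = ("JJ", "NN", "VB")               # adjectives, nouns, verbs (Penn tag prefixes)
COMMON_VERBS = {"be", "have", "do"}
lemmatizer = WordNetLemmatizer()

def cluster_wordcloud(documents):
    words = []
    for doc in documents:
        for token, tag in pos_tag(word_tokenize(doc.lower())):
            if tag.startswith(KEPT_TAGS):
                lemma = lemmatizer.lemmatize(token, pos="v" if tag.startswith("VB") else "n")
                if lemma not in COMMON_VERBS:
                    words.append(lemma)
    return WordCloud(collocations=True).generate(" ".join(words))  # also counts frequent word pairs
```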
Graph-based framework for text analysis and clustering ::: Quantitative benchmarking of topic clusters
Although our dataset has a classification hand-coded by a human operator, we do not use it in our analysis. Indeed, one of our aims is to explore the relevance of the fixed external classes as compared to content-driven groupings obtained in an unsupervised manner. Therefore we provide a double route to quantify the quality of the clusters by computing two complementary measures: (i) an intrinsic measure of topic coherence, and (ii) a measure of similarity to the external hand-coded categories.
Topic coherence of text: As an intrinsic measure of consistency of word association, we use the pointwise mutual information ($PMI$) BIBREF19, BIBREF47. The $PMI$ is an information-theoretical score that captures the probability of words being used together in the same group of documents. The $PMI$ score for a pair of words $(w_1,w_2)$ is:
where the probabilities of the words $P(w_1)$, $P(w_2)$, and of their co-occurrence $P(w_1 w_2)$ are obtained from the corpus. We obtain an aggregate $\widehat{PMI}$ for the graph partition $C=\lbrace c_i\rbrace $ by computing the $PMI$ for each cluster, as the median $PMI$ between its 10 most common words (changing the number of words gives similar results), and computing the weighted average of the $PMI$ cluster scores:
where $c_i$ denotes the clusters in partition $C$, each with size $n_i$, so that $N=\sum _{c_i \in C} n_i$ is the total number of nodes. Here $S_i$ denotes the set of top 10 words for cluster $c_i$.
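The two display equations elided above are, from the surrounding description, presumably the standard pointwise mutual information and its cluster-weighted aggregate:

$PMI(w_1, w_2) = \log \frac{P(w_1 w_2)}{P(w_1)\, P(w_2)}, \qquad \widehat{PMI}(C) = \sum _{c_i \in C} \frac{n_i}{N} \; \underset{w_1, w_2 \in S_i}{\operatorname{median}} \; PMI(w_1, w_2).$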
The $PMI$ score has been shown to perform well BIBREF19, BIBREF47 when compared to human interpretation of topics on different corpora BIBREF48, BIBREF49, and is designed to evaluate topical coherence for groups of documents, in contrast to other tools aimed at short forms of text. See BIBREF26, BIBREF27, BIBREF50, BIBREF51 for other examples.
Here, we use the $\widehat{PMI}$ score to evaluate partitions without any reference to an externally labelled `ground truth'.
Similarity between the obtained partitions and the hand-coded categories: To quantify how our content-driven unsupervised clusters compare against the external classification, we use the normalised mutual information ($NMI$), a well-known information-theoretical score that quantifies the similarity between clusterings considering correct and incorrect assignments in terms of the information between the clusterings. The NMI between two partitions $C$ and $D$ of the same graph is:
where $I(C,D)$ is the Mutual Information and $H(C)$ and $H(D)$ are the entropies of the two partitions.
The $NMI$ is bounded ($0 \le NMI \le 1$) and a higher value corresponds to higher similarity of the partitions (i.e., $NMI=1$ when there is perfect agreement between partitions $C$ and $D$). The $NMI$ score is directly related to the V-measure in the computer science literature BIBREF52.
Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm
As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification.
The supervised classification was carried out by training on features and text three classifiers commonly applied to text classification tasks BIBREF22, BIBREF23: a Ridge classifier, Support Vector Machines with a linear kernel, and Random Forests. The goal is to predict the degree of harm (DoH) among five possible values (1-5). The classification is carried out with five-fold cross validation, using 80% of the data to train the model and the remaining 20% to test it. As a measure of performance of the classifiers and models, we use the weighted average of the F1 score for all levels of DoH, which takes into account both precision and recall, i.e., both the exactness and completeness of the model.
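A sketch of this evaluation with scikit-learn is given below; the variable names are illustrative, and the feature assembly (text embedding plus one-hot encoded MS cluster label) follows the description above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

def evaluate(doc2vec_vectors, ms_cluster_labels, degree_of_harm):
    cluster_onehot = OneHotEncoder().fit_transform(
        np.asarray(ms_cluster_labels).reshape(-1, 1)).toarray()
    X = np.hstack([doc2vec_vectors, cluster_onehot])   # text embedding + content-cluster features
    y = np.asarray(degree_of_harm)                     # hand-coded DoH labels (1-5)

    for clf in (RidgeClassifier(), LinearSVC(), RandomForestClassifier()):
        f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
        print(f"{type(clf).__name__}: weighted F1 = {f1.mean():.3f}")
```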
Application to the clustering of hospital incident text reports
We showcase our methodology through the analysis of the text from NRLS patient incident reports. In addition to textual descriptions, the reports are hand-coded upon reporting with up to 170 features per case, including a two-level manual classification of the incidents.
Here, we only use the text component and apply our graph-based text clustering to a set of 3229 reports from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014. As summarised in Figure FIGREF2, we start by training our Doc2Vec text embedding using the full 13+ million records collected by the NRLS since 2004 (although, as discussed above, a much smaller corpus of NRLS documents can be used). We then infer vectors for our 3229 records, compute the cosine similarity matrix and construct an MST-kNN graph with $k=13$ for our graph-based clustering. (We have confirmed the robustness of the MST-kNN construction in our data for $k>13$ by scanning values of $k \in [1,50]$, see Section SECREF27). We then applied Markov Stability, a multi-resolution graph partitioning algorithm to the MST-kNN graph. We scan across Markov time ($t \in [0.01, 100]$ in steps of 0.01). At each $t$, we run 500 independent Louvain optimisations to select the optimal partition found, as well as quantifying the robustness to optimisation by computing the average variation of information $VI(t)$ between the top 50 partitions. Once the full scan across $t$ is finalised, we compute $VI(t,t^{\prime })$, the variation of information between the optimised partitions found across the scan in Markov time, to select partitions that are robust across scales.
Application to the clustering of hospital incident text reports ::: Markov Stability extracts content clusters at different levels of granularity
Figure FIGREF21 presents a summary of our MS analysis. We plot the number of clusters of the optimal partition and the two metrics of variation of information across all Markov times. The existence of a long plateau in $VI(t,t^{\prime })$ coupled to a dip in $VI(t)$ implies the presence of a partition that is robust both to the optimisation and across Markov time. To illustrate the multi-scale features of the method, we choose several of these robust partitions, from finer (44 communities) to coarser (3 communities), obtained at five Markov times and examine their structure and content. The multi-level Sankey diagram summarises the relationship of the partitions across levels.
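For reference, the variation of information between two partitions, used here to select robust Markov times, can be computed as $VI(C,D) = H(C) + H(D) - 2 I(C,D)$. A small sketch, assuming integer-coded community labels:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def variation_of_information(labels_a, labels_b):
    h_a = entropy(np.bincount(labels_a))   # entropy of partition A (nats)
    h_b = entropy(np.bincount(labels_b))   # entropy of partition B (nats)
    return h_a + h_b - 2.0 * mutual_info_score(labels_a, labels_b)
```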
The MS analysis of the graph reveals a multi-level structure of partitions, with a strong quasi-hierarchical organisation. We remark that our optimisation does not impose any hierarchical structure a priori, so that the observed consistency of communities across levels is intrinsic to the data and suggests the existence of sub-themes that integrate into larger thematic categories. The unsupervised detection of intrinsic scales by MS enables us to obtain groups of records with high content similarity at different levels of granularity. This capability can be used by practitioners to tune the level of description to their specific needs, and is used below as an aid in our supervised classification task in Section SECREF4.
To ascertain the relevance of the layers of content found by MS, we examined the five levels of resolution in Figure FIGREF21. For each level, we produced lemmatised word clouds, which we used to generate descriptive content labels for the communities. We then compared a posteriori the content clusters with the hand-coded categories through a Sankey diagram and a contingency table. The results are shown in Figures FIGREF22–FIGREF25 for each of the levels.
The partition into 44 communities presents content clusters with well-defined characterisations, as shown by the Sankey diagram and the highly clustered structure of the contingency table (Figure FIGREF22). Compared to the 15 hand-coded categories, this 44-community partition provides finer groupings corresponding to specific sub-themes within the generic hand-coded categories. This is apparent in the hand-coded classes `Accidents', `Medication', `Clinical assessment', `Documentation' and `Infrastructure', where a variety of meaningful subtopics are identified (see Fig. FIGREF23 for details). In other cases, however, the content clusters cut across the external categories, e.g., the clusters on labour ward, chemotherapy, radiotherapy and infection control are coherent in content but can belong to several of the external classes. At this level of resolution, our algorithm also identified highly specific topics as separate content clusters, including blood transfusions, pressure ulcer, consent, mental health, and child protection, which have no direct relationship with the external classes provided to the operator.
Figure FIGREF24A and FIGREF24B present the results for two partitions at a medium level of resolution, where the number of communities (12 and 17) is close to that of hand-coded categories (15). As expected from the quasi-hierarchy detected by our multi-resolution analysis, we find that the communities in the 17-way and 12-way partitions emerge from consistent aggregation of the smaller communities in the 44-way partition in Figure FIGREF22. Focussing on the 12-way partition, we see that some of the sub-themes in Figure FIGREF23 are merged into more general topics. An example is Accidents (community 2 in Fig. FIGREF24A), a merger of seven finer communities, which corresponds well with the external category `Patient accidents'. A similar phenomenon is seen for the Nursing cluster (community 1), which falls completely under the external category `Infrastructure'. The clusters related to `Medication' similarly aggregate into a larger community (community 3), yet there still remains a smaller, specific community related to Homecare medication (community 12) with distinct content. Other communities, on the other hand, still straddle external categories. This is clearly observable in communities 10 and 11 (Samples/lab tests/forms and Referrals/appointments), which fall naturally across the `Documentation' and `Clinical Assessment' external categories. Similarly, community 9 (Patient transfers) sits across the `Admission/Transfer' and `Infrastructure' external categories, due to its relation to nursing and hospital constraints. A substantial proportion of records was hand-coded under the generic `Treatment/Procedure' class, yet MS splits it into content clusters that retain medical coherence, e.g., Radiotherapy (Comm. 4), Blood transfusions (Comm. 7), IV/cannula (Comm. 5), Pressure ulcer (Comm. 8), and the large community Labour ward (Comm. 6).
The medical specificity of the Radiotherapy, Pressure ulcer and Labour ward clusters means that they are still preserved as separate groups at the next level of coarseness, in the 7-way partition (Figure FIGREF25A). The mergers in this case lead to larger communities referring to Medication, Referrals/Forms and Staffing/Patient transfers. Figure FIGREF25B shows the final level of agglomeration into 3 content clusters: records referring to Accidents; a group broadly referring to Procedural matters (referrals, forms, staffing, medical procedures) cutting across external categories; and the Labour ward cluster, still on its own as a subgroup with distinctive content.
This process of agglomeration of content, from sub-themes into larger themes, as a result of the multi-scale hierarchy of MS graph partitions is shown explicitly with word clouds in Figure FIGREF26 for the 17-, 12- and 7-way partitions. Our results show good overall correspondence with the hand-coded categories across resolutions, yet our results also reveal complementary categories of incidents not defined in the external classification. The possibility of tuning the granularity afforded by our method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes.
Application to the clustering of hospital incident text reports ::: Robustness of the results and comparison with other methods
We have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of our framework. Specifically, we evaluate the effect of: (i) using Doc2Vec embeddings instead of BoW vectors; (ii) the size of corpus for training Doc2Vec; (iii) the sparsity of the MST-kNN graph construction. We have also carried out quantitative comparisons to other methods for topic detection and clustering: (i) LDA-BoW, and (ii) several standard clustering methods.
Doc2Vec provides improved clusters compared to BoW: As compared to standard bag of words (BoW), fixed-sized vector embeddings (Doc2Vec) produce lower-dimensional vector representations with higher semantic and syntactic content. Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity and is less sensitive to hyper-parameters BIBREF30. To quantify the improvement provided by Doc2Vec, we constructed an MST-kNN graph from TF-iDF vectors and ran MS on this TF-iDF similarity graph. Figure FIGREF28 shows that Doc2Vec outperforms BoW across all resolutions in terms of both $NMI$ and $\widehat{PMI}$ scores.
Robustness to the size of the Doc2Vec training dataset: Table TABREF5 indicates a small effect of the size of the training corpus on the Doc2Vec model. To confirm this, we trained two additional Doc2Vec models on sets of 1 million and 2 million records (randomly chosen from the full 13+ million records) and followed the same procedure to construct the MST-kNN graph and carry out the MS analysis. Figure FIGREF29 shows that the performance is affected only mildly by the size of the Doc2Vec training set.
Robustness to the level of graph sparsification:
We sparsify the matrix of cosine similarities using the MST-kNN graph construction. The smaller the value of $k$, the sparser the graph. Sparser graphs have computational advantages for community detection algorithms, but too much sparsification degrades the results. Figure FIGREF30 shows the effect of sparsification in the graph construction on the performance of MS clusters. Our results are robust to the choice of $k$, provided it is not too small: both the $NMI$ and $\widehat{PMI}$ scores reach a similar level for values of $k$ above 13-16. Due to computational efficiency, we favour a relatively small value of $k=13$.
Comparison of MS partitions to Latent Dirichlet Allocation with Bag-of-Words (LDA-BoW): We have compared the MS results to LDA, a widely used methodology for text analysis. A key difference in LDA is that a different model needs to be trained when the number of topics changes, whereas our MS method produces clusterings at all levels of resolution in one go. To compare the outcomes, we trained five LDA models corresponding to the five MS levels in Figure FIGREF21. Table TABREF31 shows that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with $NMI$), with some differences depending on the scale, whereas the MS clusters have higher topic coherence (as given by $\widehat{PMI}$) across all scales.
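For reference, a minimal sketch of the LDA-BoW baseline is given below; it assumes a variable `token_lists` holding the pre-processed token lists of the analysis set (a hypothetical name) and trains one LDA model per MS resolution level. The training settings shown are illustrative, not the exact ones used for the benchmark.

```python
# Minimal sketch of the LDA-BoW baseline (illustrative settings only).
from gensim import corpora
from gensim.models import LdaModel

dictionary = corpora.Dictionary(token_lists)                     # token -> integer id
bow_corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]

# One LDA model per target number of topics, matching the five MS levels.
lda_models = {}
for num_topics in (3, 7, 12, 17, 44):
    lda_models[num_topics] = LdaModel(corpus=bow_corpus, id2word=dictionary,
                                      num_topics=num_topics,
                                      passes=10, random_state=0)  # assumed settings

# Hard cluster assignment: most probable topic per document.
def lda_labels(lda, corpus):
    return [max(lda.get_document_topics(doc), key=lambda x: x[1])[0]
            for doc in corpus]
```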
To give an indication of computational cost, we ran both methods on the same servers. Our method takes approximately 13 hours in total (11 hours to train the Doc2Vec model on 13 million records and 2 hours to produce the full MS scan with 400 partitions across all resolutions). The time required to train just the 5 LDA models on the same corpus amounts to 30 hours (with timings ranging from $\sim $2 hours for the 3 topic LDA model to 12.5 hours for the 44 topic LDA model). This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. The MS partitions at all resolutions are available for further investigation if so needed.
Comparison of MS to other partitioning and community detection algorithms: We have partitioned the same MST-kNN graph using several well-known algorithms readily available in code libraries (i.e., the iGraph module for Python): Modularity Optimisation BIBREF53, InfoMap BIBREF5, Walktrap BIBREF54, Label Propagation BIBREF55, and Multi-resolution Louvain BIBREF43. Note that, in contrast with our multiscale MS analysis, these methods give just one partition at a particular resolution (or two for the Louvain implementation in iGraph). Figure FIGREF32 shows that MS provides improved or equal results to all those other graph partitioning methods for both $NMI$ and $\widehat{PMI}$ across all scales. Only for very fine resolution (more than 50 clusters) does Infomap, which partitions graphs into small clique-like subgraphs BIBREF40, BIBREF56, provide a slightly improved $NMI$. Therefore, MS finds both relevant and high quality clusterings across all scales by sweeping the Markov time parameter.
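The baseline comparison can be reproduced along the following lines; the snippet is a sketch that maps each named method to a standard iGraph routine (the exact routines and settings used for the baselines are an assumption) and takes the weighted adjacency matrix `A` of the MST-kNN graph as input.

```python
# Sketch of the comparison with standard community detection methods in iGraph.
import numpy as np
import igraph as ig

src, dst = np.nonzero(np.triu(A, k=1))            # upper triangle -> undirected edges
g = ig.Graph(n=A.shape[0],
             edges=list(zip(src.tolist(), dst.tolist())),
             edge_attrs={"weight": A[src, dst].tolist()})

partitions = {
    "Louvain":             g.community_multilevel(weights="weight"),
    "Modularity (greedy)": g.community_fastgreedy(weights="weight").as_clustering(),
    "InfoMap":             g.community_infomap(edge_weights="weight"),
    "Walktrap":            g.community_walktrap(weights="weight").as_clustering(),
    "Label Propagation":   g.community_label_propagation(weights="weight"),
}
for name, part in partitions.items():
    print(f"{name}: {len(part)} communities")
```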
Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier
Here we approach the task of training a supervised classifier that predicts the degree of harm of an incident based on other features of the record (such as location, external category, and medical specialty) and on the textual component of the report. To this end, we use the embedded text vectors and MS cluster labels of the records as features to predict the degree of harm to the patient.
Each NRLS record has more than 170 features filled manually by healthcare staff, including the degree of harm (DoH) to the patient, a crucial assessment of the reported incident. The incident is classified into five levels: 'No harm', 'Low', 'Moderate', 'Severe', and 'Death'. However, the reported DoH is not consistent across hospitals and can be unreliable BIBREF6.
The lack of reliability of the recorded DoH poses a challenge when training supervised models. Given the size of the dataset, it is not realistic to ask medics to re-evaluate incidents manually. Instead, we use the publicly available `Learning from mistakes league table' based on NHS staff survey data to identify organisations (NHS Trusts) with `outstanding' (O) and `poor reporting culture' (PRC). Our hypothesis is that training our classifiers on records from organisations with better rankings in the league table should lead to improved prediction. If there is a real disparity in the manual classification among organisations, only incidents labelled by O-ranked Trusts should be regarded as a `ground truth'.
Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier ::: Supervised classification of degree of harm
We study NRLS incidents reported between 2015 and 2017 from O-ranked and PRC-ranked Trusts. The 2015-17 NRLS dataset is very unbalanced: there are 2,038,889 “No harm” incidents against only 6,754 “Death” incidents. To tackle this issue, we sample our dataset as recommended by BIBREF8, and randomly select 1,016 records each of `No harm' , `Low', and `Moderate', and 508 records each of `Severe' and `Death' incidents, from each type of Trust. We thus obtain two datasets (O and PRC) consisting of a total of 4,064 incidents each.
For each dataset (O and PRC), we train three classifiers (Ridge, Support Vector Machine with a linear kernel, and Random Forest) with five-fold cross validation, and we compute the F-1 scores of each fold to evaluate the model performance. We first train models using three categories from the reports: location (L), external hand-coded category (C), and medical specialty (S). We also compute the performance of models trained on text features, both TF-iDF and Doc2Vec. We also study models trained on a mixture of text and categories. Finally, we run Markov Stability as described above to obtain cluster labels for each dataset (O and PRC) at different resolutions (70, 45, 30 and 13 communities). We then evaluate if it is advantageous to include the labels of the MS clusters as additional features.
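A sketch of this cross-validated benchmark with scikit-learn is given below; `X` stands for whichever feature matrix is being tested (categories, TF-iDF, Doc2Vec and/or MS labels) and `y` for the hand-coded degree of harm, both assumed to be prepared beforehand.

```python
# Sketch of the five-fold cross-validated comparison of the three classifiers.
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "Ridge":         RidgeClassifier(),
    "SVM (linear)":  SVC(kernel="linear"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
    print(f"{name}: weighted F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```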
Table TABREF34 presents the results of our numerical experiments. Our first observation is that, for this data, SVM with linear kernel has the best performance (similar to Ridge), and Random Forests perform poorly in general. There are several conclusions from our study. First, there is a consistent difference between the scores of the O and PRC datasets (ranging from 1.7% to 11.2% for an average of 5.6%), thus confirming our hypothesis that automated classification performs better when training with data from organizations with better rankings in the league table. Second, using text features is highly advantageous in predicting the degree of harm compared to category alone: there is a substantial increase of up to 100% in the F1 score between column 1 (all three categories) and column 2 (Tf-iDF). Furthermore, adding categorical features (L, C, or S) to the TF-iDF text features improves the scores only marginally (around 2%), as seen by comparing columns 3–6 with column 2.
Given the demonstrated importance of text, we studied the effect of using more refined textual features for classification. In columns 7-10, we considered the effect of adding to TF-iDF the MS labels extracted from our text analysis (as described above), and we find a larger improvement of around 7% with respect to mere TF-iDF (column 2). The improvement is larger for finer clusterings into 70 and 45 communities, which contain enough detail that can be associated with levels of risk (e.g., type of accident). This supports the value of the multi-resolution groupings we have extracted through our analysis.
We also studied the impact of using Doc2Vec vectors as features. Interestingly, the comparison between columns 2 and 11 shows that there is only a slight improvement of 2% when using Doc2Vec instead of TF-iDF features for the case of records from O-ranked institutions, but the improvement is 12% for the records from PRC Trusts. This difference suggests that the usage of terms is more precise in O-ranked hospitals, so that the differences between TF-iDF and Doc2Vec are minimised, while the advantages of the syntactic and semantic reconstruction of the Doc2Vec embedding become more important in the case of PRC Trusts.
Based on these findings, we build our final model that uses a Support Vector Machine classifier with both Doc2Vec embeddings and the MS labels for 30 content clusters (encoded via a One-Hot encoder) as features. We choose to keep only 30 communities as this performs well when combined with the Doc2Vec embedding (without slowing down the classifier too much). We performed a grid search to optimise the hyperparameters of our model (penalty = 10, tolerance for stopping criterion = 0.0001, linear kernel). For the O-ranked records, our model achieves a weighted F1 score of 0.657, with a 19% improvement with respect to TF-iDF text features and a 107% improvement with respect to categorical features. (For the PRC records, the corresponding improvements are 33% and 215%, respectively.) Note that similar improvements are also obtained for the other classifiers when using Doc2Vec and MS labels as features. It is also worth noting that the differences in the prediction of DoH between PRC and O-ranked records are reduced when using text tools and, specifically, the F1-score of the SVM classifier based on Doc2Vec with MS is almost the same for both datasets. Hence the difference in the quality of the reporting categories can be ameliorated by the use of the textual content of the reports. We summarise the main comparison of the performance of the SVM classifier based on categorical features, raw text, and text with content labels for both datasets in Figure FIGREF35.
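As an illustration of the feature construction for this final model, the following sketch concatenates the Doc2Vec vectors with one-hot encoded MS labels and fits the linear-kernel SVM with the grid-searched hyper-parameters; the variable names (`doc_vectors`, `ms_labels_30`, `y`) are placeholders.

```python
# Sketch of the final SVM: Doc2Vec embeddings + one-hot MS labels (30 clusters).
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC

onehot = OneHotEncoder().fit_transform(ms_labels_30.reshape(-1, 1)).toarray()
X = np.hstack([doc_vectors, onehot])              # 300-dim embedding + 30 indicator columns

final_clf = SVC(kernel="linear", C=10, tol=1e-4)  # penalty = 10, tolerance = 0.0001
final_clf.fit(X, y)
```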
Examination of the types of errors and ex novo re-classification by clinicians:
A further analysis of the confusion matrices used to compute the F1 score reveals that most of the errors of our model are concentrated in the `No harm', `Low harm' and `Moderate harm' categories, whereas fewer errors are incurred in the `Severe harm' and `Death' categories. Therefore, our method is more likely to return false alarms rather than missing important and harmful incidents.
In order to have a further evaluation of our results, we asked three clinicians to analyse ex novo a randomly chosen sample of 135 descriptions of incidents, and to determine their degree of harm based on the information in the incident report. The sample was selected from the O-ranked dataset and no extra information apart from the text was provided. We then compared the DoH assigned by the clinicians with both the results of our classifier and the recorded DoH in the dataset.
The agreement rate of the clinicians' assessment with the recorded DoH was remarkably low. For example, the agreement in the `No Harm' incidents was only 38%, and in the `Severe' incidents only 49%. In most cases, though, the disparities amounted to switching the DoH by one degree above or below. To reduce this variability, we analysed the outcomes in terms of three larger groups: `No Harm' and `Low Harm' incidents were considered as one outcome; `Moderate Harm' was kept separate; and `Severe Harm' and `Death' were grouped as one outcome, since they both need to be notified to NHS safety managers.
The results are presented in Table TABREF36. Our classification agrees with the ex novo assessment of the clinicians as well as the pre-existing DoH in the dataset does, but our method has higher agreement on the severe and deadly incidents. These results confirm that our method performs as well as the original annotators but is better at identifying risky events.
Discussion
We have applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and analyses the ensuing similarity graph of documents through multi-resolution capabilities to capture clusters without imposing a priori their number or structure. The different levels of resolution found to be relevant can be chosen by the practitioner to suit the requirements of detail for each specific task. For example, the top level categories of the pre-defined classification hierarchy are highly diverse in size, with large groups such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure' alongside small, specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'. Our multi-scale partitioning finds additional subcategories with medical detail within some of the large categories (Fig. FIGREF22 and FIGREF23).
Our a posteriori analysis showed that the method recovers meaningful clusters of content as measured by the similarity of the groups against the hand-coded categories and by the intrinsic topic coherence of the clusters. The clusters have high medical content, thus providing complementary information to the externally imposed classification categories. Indeed, some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, even if they cannot be mapped to standardised external categories.
An area of future research will be to confirm whether the finer unsupervised clusters found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories), which is used less consistently in hospital settings. The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) BIBREF9 to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care.
We have used our clusters within a supervised classifier to predict the degree of harm of an incident based only on free-text descriptions. The degree of harm is an important measure in hospital evaluation and has been shown to depend on the reporting culture of the particular organisation. Overall, text descriptions complemented by the topic labels extracted by our method show improved performance in this task. The use of such enhanced NLP tools could help improve reporting frequency and quality, in addition to reducing burden to staff, since most of the necessary information can be retrieved automatically from text descriptions. Further work would aim to add interpretability to the supervised classification BIBREF57, so as to provide medical staff with a clearer view of the outcomes of our method and to encourage its uptake.
One of the advantages of a free text analytical approach is the provision, in a timely manner, of an intelligible description of incident report categories derived directly from the 'words' of the reporters themselves. Insights from the analysis of such free text entries can add richer information than would otherwise be obtained from pre-defined classes. Not only could this improve the current state of play, where much of the free text of these reports goes unused, but, by avoiding the strict assignment to pre-defined categories of fixed granularity, free text analysis could also open an opportunity for feedback and learning through more nuanced classifications, as a complementary axis to existing approaches.
Currently, local incident reporting systems used by hospitals to submit reports to the NRLS require risk managers to improve data quality, due to errors or uncertainty in categorisation. The application of free text analytical approaches has the potential to free up time from this labour-intensive task, focussing instead on quality improvement derived from the content of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit existing categories, by using methods for anomaly detection to decide whether new topic clusters should be created. This is a direction of future work.
Further work also includes the use of our method to enable comparisons across healthcare organisations and also to monitor changes in their incident reports over time. Another interesting direction is to provide online classification suggestions to users based on the text they input as an aid with decision support and data collection, which can also help fine-tune the predefined categories. Finally, it would be interesting to test if the use of deep learning algorithms can improve our classification scores.
We thank Elias Bamis, Zijing Liu and Michael Schaub for helpful discussions. This research was supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre and NIHR Imperial Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. All authors acknowledge support from the EPSRC through award EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare. | they are used as additional features in a supervised classification task |
dbdf13cb4faa1785bdee90734f6c16380459520b | dbdf13cb4faa1785bdee90734f6c16380459520b_0 | Q: What cluster identification method is used in this paper?
Text: Introduction
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics present a unique opportunity to improve the quality and safety of medical care for patient benefit BIBREF1. Much recent research in this area has been on personalised medicine, with the aim to deliver improved diagnostic and treatment through the synergistic integration of datasets at the level of the individual. A different source of healthcare data pertains to organisational matters. In the United Kingdom, the National Health Service (NHS) has a long history of documenting the different aspects of healthcare provision, and is currently in the process of making available properly anonymised datasets, with the aim of leveraging advanced analytics to improve NHS services.
One such database is the National Reporting and Learning System (NRLS), a repository of patient safety incident reports from the NHS in England and Wales set up in 2003, which now contains over 13 million records. The incidents are reported under standardised categories and contain both organisational and spatio-temporal information (structured data) and a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission or discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into complex processes in healthcare with a view towards service improvement.
Although statistical analyses are routinely performed on the structured data (dates, locations, hand-coded categories, etc), free text is typically read manually and often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. These limitations are due to a lack of methodologies that can provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Automatic categorisation of incidents from free text would sidestep human error and difficulties in assigning incidents to a priori pre-defined lists in the reporting system. Such tools can also offer unbiased insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
In this work, we showcase an algorithmic methodology that detects content-based groups of records in an unsupervised manner, based only on the free (unstructured) textual descriptions of the incidents. To do so, we combine deep neural-network high-dimensional text-embedding algorithms with graph-theoretical methods for multiscale clustering. Specifically, we apply the framework of Markov Stability (MS), a multiscale community detection algorithm, to sparsified graphs of documents obtained from text vector similarities. Our method departs both from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF2, and from more recent approaches that have used deep neural network based language models, but have used k-means clustering without a graph-based analysis BIBREF3. Previous applications of network theory to text analysis have included the work of Lanchichinetti and co-workers BIBREF4, who proposed a probabilistic graph construction analysed with the InfoMap algorithm BIBREF5; however, their community detection was carried out at a single-scale and the BoW representation of text lacks the power of text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than from pre-designed classifications. The obtained results can help mitigate human error or effort in finding the right category in complex classification trees. We illustrate in our analysis the insight gained from this unsupervised, multi-resolution approach in this specialised corpus of medical records.
As an additional application, we use machine learning methods for the prediction of the degree of harm of incidents directly from the text in the NRLS incident reports. Although the degree of harm is recorded by the reporting person for every event, this information can be unreliable as reporters have been known to game the system, or to give different answers depending on their professional status BIBREF6. Previous work on predicting the severity of adverse events BIBREF7, BIBREF8 used reports submitted to the Advanced Incident Management System by Australian public hospitals, and used BoW and Support Vector Machines (SVMs) to detect extreme-risk events. Here we demonstrate that publicly reported measures derived from NHS Staff Surveys can help select ground truth labels that allow supervised training of machine learning classifiers to predict the degree of harm directly from text embeddings. Further, we show that the unsupervised clusters of content derived with our method improve the classification results significantly.
An a posteriori manual labelling by three clinicians agree with our predictions based purely on text almost as much as with the original hand-coded labels. These results indicate that incidents can be automatically classified according to their degree of harm based only on their textual descriptions, and underlines the potential of automatic document analysis to help reduce human workload.
Introduction ::: Data description
The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among many other details. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable.
The records are manually classified by operators according to a two-level system of incident types. The top level contains 15 categories including general classes such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure', alongside more specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'.
Each record is also labelled based on the degree of harm to the patients as one of: `No Harm', `Low Harm', `Moderate Harm', `Severe Harm' or `Death'. These degrees are precisely defined by the WHO BIBREF9 and the NHS BIBREF10.
Graph-based framework for text analysis and clustering
Our framework combines text-embedding, geometric graph construction and multi-resolution community detection to identify, rather than impose, content-based clusters from free, unstructured text in an unsupervised manner.
Figure FIGREF2 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, with words in their most normalised forms and some words removed if they have no distinctive meaning when used out of context BIBREF11, BIBREF12. We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF13 on the full set (13 million) of pre-processed text records. (Training a vector model on smaller sets of 1 million records also produces good results as seen in Table TABREF5). This training step of the text model is only done once.
The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters.
The partitions found by MS across levels of resolution are analysed a posteriori through visualisations and quantitative scores. The visualisations include: (i) word clouds to summarise the main content; (ii) graph layouts; and (iii) Sankey diagrams and contingency tables that capture correspondences between partitions. The quantitative scores include: (i) the intrinsic topic coherence (measured by the pairwise mutual information BIBREF19, BIBREF20); and (ii) the similarity to hand-coded categories (measured by the normalised mutual information BIBREF21).
Our framework also covers prediction of the degree of harm (DoH) caused to the patient using text embeddings and the unsupervised cluster assignments obtained from our multiscale graph partitioning. To perform this task, we use the hand-coded DoH from the NRLS to train three commonly used classifiers BIBREF22, BIBREF23 (Ridge, Support Vector Machine with a linear kernel, Random Forest) to predict the DoH using TF-iDF and Doc2Vec embeddings of the text and our MS cluster assignments. The classifiers are then evaluated in predicting the DoH using cross-validation.
We now explain the steps of the methodological pipeline in more detail.
Graph-based framework for text analysis and clustering ::: Text Preprocessing
Text preprocessing is important to enhance the performance of text embedding techniques. We applied standard preprocessing to the raw text of all 13 million records in our corpus, as follows. We divide our documents into individual word tokens using the NLTK library BIBREF11 and remove punctuation and digit-only tokens. We then apply word stemming using the Porter algorithm BIBREF12, BIBREF24. If the Porter method cannot find a stemmed version for a token, we apply the Snowball algorithm BIBREF25. Finally, we remove any stop-words (repeat words with low content) using NLTK's stop-word list. Although pre-processing reduces some of the syntactic information, it consolidates the semantic information of the vocabulary. We note that the incident descriptions contain typos and acronyms, which have been left uncorrected to avoid manual intervention or the use of spell checkers, so as to mimic as closely as possible a realistic scenario.
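A minimal sketch of this pre-processing pipeline with NLTK is given below; the exact order of stop-word removal relative to stemming is an implementation detail assumed here.

```python
# Sketch of the pre-processing step: tokenisation, cleaning, stemming, stop-words.
import string
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, SnowballStemmer

porter = PorterStemmer()
snowball = SnowballStemmer("english")
stop_words = set(stopwords.words("english"))

def preprocess(text):
    tokens = word_tokenize(text.lower())
    # drop punctuation-only and digit-only tokens
    tokens = [t for t in tokens
              if not all(ch in string.punctuation for ch in t) and not t.isdigit()]
    stems = []
    for t in tokens:
        stem = porter.stem(t)
        if not stem:                      # fall back to Snowball, as described above
            stem = snowball.stem(t)
        stems.append(stem)
    return [s for s in stems if s not in stop_words]
```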
Graph-based framework for text analysis and clustering ::: Text Vector Embedding
Computational text analysis relies on a mathematical representation of the base units of text (character $n$-grams, words or documents). Since our methodology is unsupervised, we avoid the use of labelled data, in contrast to supervised or semi-supervised classification methods BIBREF26, BIBREF27. In our work, we use a representation of text documents as vectors following recent developments in the field.
Traditionally, bag-of-words (BoW) methods represented documents as vectors of word frequencies weighted by inverse document frequency (TF-iDF). Such methods provide a statistical description of documents but they do not carry information about the order or proximity of words to each other and hence disregard semantic or syntactic relationships between words. In addition, BoW representations carry little information content as they tend to be high-dimensional and very sparse, due to the large size of word dictionaries and low frequencies of many terms.
Recently, deep neural network language models have successfully overcome the limitations of BoW methods by incorporating neighbourhoods in the mathematical description of each term. Distributed Bag of Words (DBOW), better known as Doc2Vec BIBREF13, is a form of Paragraph Vectors (PV) which creates a model that represents any word sequence (i.e. sentences, paragraphs, documents) as $d$-dimensional vectors, where $d$ is user-defined (typically $d=300$). Training a Doc2Vec model starts with a random $d$-dimensional vector assignment for each document in the corpus. A stochastic gradient descent algorithm iterates over the corpus with the objective of predicting a randomly sampled set of words from each document by using only the document's $d$-dimensional vector BIBREF13. The objective function being optimised by PV-DBOW is similar to the skip-gram model in Refs. BIBREF28, BIBREF29. Doc2Vec has been shown BIBREF30 to capture both semantic and syntactic characterisations of the input text, and outperforms BoW-based models such as LDA BIBREF2.
Benchmarking the Doc2Vec training: Here, we use the Gensim Python library BIBREF31 to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters (chosen based on our own numerical experiments and the general guidelines provided by BIBREF32) in order to optimise the output. To characterise the usability and quality of models, we trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters. In particular, we checked the effect of corpus size by training Doc2Vec models on the full 13 million NRLS records and on randomly sampled subsets of 1 million and 2 million records.
Since our target analysis has heavy medical content and specific use of words, we also tested the importance of the training corpus by generating an additional Doc2Vec model using a set of 5 million articles from the English Wikipedia representing standard, generic English usage, which works well in the analysis of news articles BIBREF33.
The results in Table TABREF5 show that training on the highly specific text from the NRLS records is an important ingredient in the successful vectorisation of the documents, as shown by the degraded performance for the Wikipedia model across a variety of training hyper-parameters. On the other hand, reducing the size of the corpus from 13 million to 1 million records did not affect the benchmarking dramatically. This robustness of the results to the size of the training corpus was confirmed further with the use of more detailed metrics, as discussed below in Section SECREF27 (see e.g., Figure FIGREF29).
Based on our benchmarking, henceforth we use the Doc2Vec model trained on the 13+ million NRLS records with the following hyper-parameters: {training method = dbow, number of dimensions for feature vectors size = 300, number of epochs = 10, window size = 15, minimum count = 5, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001 }. As an indication of computational cost, the training of this model takes approximately 11 hours (run in parallel with 7 threads) on shared servers.
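For concreteness, a sketch of the PV-DBOW training call with these hyper-parameters is shown below; `corpus_tokens` is a placeholder for an iterable of pre-processed token lists.

```python
# Sketch of the PV-DBOW (Doc2Vec) training with the hyper-parameters above.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tagged_docs = [TaggedDocument(words=tokens, tags=[i])
               for i, tokens in enumerate(corpus_tokens)]

model = Doc2Vec(documents=tagged_docs,
                dm=0,               # dbow training method
                vector_size=300,    # dimension of the feature vectors
                window=15,
                min_count=5,
                negative=5,         # number of negative samples
                sample=0.001,       # down-sampling threshold for frequent words
                epochs=10,
                workers=7)          # 7 parallel threads, as reported

# Inference for a new (pre-processed) record:
vector = model.infer_vector(["patient", "slip", "ward", "floor"])
```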
Graph-based framework for text analysis and clustering ::: Similarity graph of documents from text similarities
Once the Doc2Vec model is trained, we use it to infer a vector for each record in our analysis subset and construct $\hat{S}$, a similarity matrix between the vectors by: computing the matrix of cosine similarities between all pairs of records, $S_\text{cos}$; transforming it into a distance matrix $D_{cos} = 1-S_{cos}$; applying element-wise max norm to obtain $\hat{D}= D_{cos} / \Vert D_{cos}\Vert _{max}$; and normalising the similarity matrix $\hat{S} = 1-\hat{D}$ which has elements in the interval $[0,1]$.
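The corresponding computation can be sketched as follows, assuming `vectors` holds the inferred Doc2Vec vectors of the analysis set; removing the diagonal self-similarities before the graph construction is an assumption of this sketch.

```python
# Sketch of the normalised similarity matrix construction.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

S_cos = cosine_similarity(vectors)     # pairwise cosine similarities
D_cos = 1.0 - S_cos                    # cosine distance matrix
D_hat = D_cos / D_cos.max()            # element-wise max normalisation
S_hat = 1.0 - D_hat                    # normalised similarities in [0, 1]
np.fill_diagonal(S_hat, 0.0)           # assumed: drop self-similarities
```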
This similarity matrix can be thought of as the adjacency matrix of a fully connected weighted graph. However, such a graph contains many edges with small weights reflecting the fact that in high-dimensional noisy data even the least similar nodes present a substantial degree of similarity. Indeed, such weak similarities are in most cases redundant and can be explained through stronger pairwise similarities. These weak, redundant edges obscure the graph structure, as shown by the diffuse visualisation in Figure FIGREF7A.
To reveal the graph structure, we sparsify the similarity matrix to obtain a MST-kNN graph BIBREF14 based on a geometric heuristic that preserves the global connectivity of the graph while retaining details about the local geometry of the dataset. The MST-kNN algorithm starts by computing the minimum spanning tree (MST) of the full matrix $\hat{D}$, i.e., the tree with $(N-1)$ edges connecting all nodes in the graph with minimal sum of edge weights (distances). The MST is computed using the Kruskal algorithm implemented in SciPy BIBREF34. To this MST, we add edges connecting each node to its $k$ nearest nodes (kNN) if they are not already in the MST. Here $k$ is an user-defined parameter that regulates the sparsity of the resulting graph. The binary adjacency matrix of the MST-kNN graph is Hadamard-multiplied with $\hat{S}$ to give the adjacency matrix $A$ of the weighted, undirected sparsified graph.
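A sketch of this construction with SciPy is given below; it takes the normalised distance and similarity matrices from the previous step and uses $k=13$ as in the main analysis.

```python
# Sketch of the MST-kNN sparsification of the similarity matrix.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_knn_graph(D_hat, S_hat, k=13):
    N = D_hat.shape[0]
    mst = minimum_spanning_tree(D_hat).toarray() > 0   # MST of the distance matrix
    adjacency = mst | mst.T                            # symmetrise MST edges
    for i in range(N):                                 # add k nearest neighbours
        nearest = np.argsort(D_hat[i])[1:k + 1]        # skip the node itself
        adjacency[i, nearest] = True
        adjacency[nearest, i] = True
    return adjacency * S_hat                           # Hadamard product with similarities

A = mst_knn_graph(D_hat, S_hat, k=13)
```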
The network visualisations in Figure FIGREF7 give an intuitive picture of the effect of sparsification as $k$ is decreased. If $k$ is very small, the graph is very sparse but not robust to noise. As $k$ is increased, the local similarities between documents induce the formation of dense subgraphs (which appear closer in the graph visualisation layout). When the number of neighbours becomes too large, the local structure becomes diffuse and the subgraphs lose coherence, signalling the degradation of the local graph structure. Relatively sparse graphs that preserve important edges and global connectivity of the dataset (guaranteed here by the MST) have computational advantages when using community detection algorithms.
Although we use here the MST-kNN construction due to its simplicity and robustness, network inference, graph sparsification and graph construction from data is an active area of research, and several alternatives exist based on different heuristics, e.g., Graphical Lasso BIBREF35, Planar Maximally Filtered Graph BIBREF36, spectral sparsification BIBREF37, or the Relaxed Minimum Spanning Tree (RMST) BIBREF38. We have experimented with some of those methods and obtained comparable results. A detailed comparison of sparsification methods as well as the choice of distance in defining the similarity matrix $\hat{S}$ is left for future work.
Graph-based framework for text analysis and clustering ::: Multiscale Graph Partitioning
Community detection encompasses various graph partitioning approaches which aim to find `good' partitions into subgraphs (or communities) according to different cost functions, without imposing the number of communities a priori BIBREF39. The notion of community depends on the choice of cost function. Commonly, communities are taken to be subgraphs whose nodes are connected strongly within the community with relatively weak inter-community edges. Such structural notion is related to balanced cuts. Other cost functions are posed in terms of transitions inside and outside of the communities, usually as one-step processes BIBREF5. When transition paths of all lengths are considered, the concept of community becomes intrinsically multi-scale, i.e., different partitions are relevant at different time scales leading to a multi-level description dictated by the transition dynamics BIBREF15, BIBREF40, BIBREF16. This leads to the framework of Markov Stability (MS), a dynamics-based, multi-scale community detection methodology, which recovers several well-known heuristics as particular cases BIBREF15, BIBREF17, BIBREF18.
MS is an unsupervised community detection method that finds robust and stable partitions of a graph (and the associated communities) under the evolution of a continuous-time diffusion process without a priori choice of the number or type of communities or their relative relationships BIBREF15, BIBREF40, BIBREF16, BIBREF41 . In simple terms, MS can be understood by analogy to a drop of ink diffusing on the graph: the ink diffuses homogeneously unless the graph has intrinsic sub-structures, in which case the ink gets transiently contained, over particular time scales, within groups of nodes. The existence of such transients indicates a natural scale to partition the graph along the subgraphs (or communities) where the diffusion is transiently trapped. As the process continues to evolve, the ink diffuses out of those communities but might get transiently contained in other, larger subgraphs, if such multi-level structure exists. By analysing the Markov dynamics over time, MS detects the structure of the graph across scales. If a graph has no natural scales for partitioning, then MS returns no communities. The Markov time $t$ thus acts as a resolution parameter that allows us to extract robust partitions that persist over particular time scales, in an unsupervised manner.
Mathematically, given the adjacency matrix $A_{N \times N}$ of the graph obtained as described previously, let us define the diagonal matrix $D=\text{diag}(\mathbf {d})$, where $\mathbf {d}=A \mathbf {1}$ is the degree vector. The random walk Laplacian matrix is defined as $L_\text{RW}=I_N-D^{-1}A$, where $I_N$ is the identity matrix of size $N$ and the transition matrix (or kernel) of the associated continuous-time Markov process is $P(t)=e^{-t L_\text{RW}}, \, t>0$ BIBREF16. Any partition $\mathcal {H}$ into $C$ clusters is associated with a binary membership matrix $H_{N \times C}$ that maps the $N$ nodes into the clusters. Below, we will use the matrix $H$ to denote the corresponding partition $\mathcal {H}$. We can then compute the $C\times C$ clustered autocovariance matrix:
$$R(t,H) = H^\top \left( \Pi \, P(t) - \pi \pi ^\top \right) H,$$
where $\pi $ is the steady-state distribution of the process and $\Pi =\text{diag}(\pi )$. The element $[R(t,H)]_{\alpha \beta }$ quantifies the probability that a random walker starting from community $\alpha $ at $t=0$ will be in community $\beta $ at time $t$, minus the probability that this event occurs by chance at stationarity.
The above definitions allow us to introduce our cost function measuring the goodness of a partition over time $t$, termed the Markov Stability of partition $H$:
$$r(t,H) = \text{trace} \left[ R(t,H) \right].$$
A partition $H$ that maximises $r(t,H)$ is comprised of communities that preserve the flow within themselves over time $t$, since in that case the diagonal elements of $R(t,H)$ will be large and the off-diagonal elements will be small. For details, see BIBREF15, BIBREF40, BIBREF16, BIBREF42.
Our computational algorithm thus searches for partitions at each Markov time $t$ that maximise $r(t,H)$. Although the maximisation of (DISPLAY_FORM11) is an NP-hard problem (hence with no guarantees for global optimality), there are efficient optimisation methods that work well in practice. Our implementation here uses the Louvain Algorithm BIBREF43, BIBREF18 which is efficient and known to give good results when applied to benchmarks. To obtain robust partitions, we run the Louvain algorithm 500 times with different initialisations at each Markov time and pick the best 50 with the highest Markov Stability value $r(t,H)$. We then compute the variation of information BIBREF44 of this ensemble of solutions $VI(t)$, as a measure of the reproducibility of the result under the optimisation. In addition, we search for partitions that are persistent across time $t$, as given by low values of the variation of information between optimised partitions across time $VI(t,t^{\prime })$. Robust partitions are therefore indicated by Markov times where $VI(t)$ shows a dip and $VI(t,t^{\prime })$ has an extended plateau with low values, indicating consistency under the optimisation and validity over extended scales BIBREF42, BIBREF16. Below, we apply MS to find partitions across scales of the similarity graph of documents, $A$. The communities detected correspond to groups of documents with similar content at different levels of granularity.
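To make the objective concrete, the following sketch evaluates $r(t,H)$ for a given partition of the similarity graph; the Louvain optimisation and the variation-of-information analysis of the full pipeline are not reproduced here.

```python
# Sketch of the Markov Stability score r(t,H) for a given partition.
import numpy as np
from scipy.linalg import expm

def markov_stability(A, labels, t):
    """A: weighted adjacency matrix; labels: community index per node; t: Markov time."""
    d = A.sum(axis=1)
    pi = d / d.sum()                            # stationary distribution of the walk
    L_rw = np.eye(len(d)) - A / d[:, None]      # random-walk Laplacian I - D^{-1} A
    P_t = expm(-t * L_rw)                       # transition kernel P(t)

    H = np.zeros((len(d), labels.max() + 1))    # binary membership matrix
    H[np.arange(len(d)), labels] = 1.0

    R = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H
    return np.trace(R)
```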
Graph-based framework for text analysis and clustering ::: Visualisation and interpretation of the results
Graph layouts: We use the ForceAtlas2 BIBREF45 layout algorithm to represent graphs on the plane. This layout assigns a harmonic spring to each edge and, through iterative rearrangements, finds an arrangement on the plane that balances attractive and repulsive forces between nodes. Hence similar nodes tend to appear close together on this layout. We colour the nodes by either hand-coded categories (Figure FIGREF7) or multiscale MS communities (Figure FIGREF21). Spatially coherent colourings on this layout imply good clusters in terms of the similarity graph.
Tracking membership through Sankey diagrams: Sankey diagrams allow us to visualise the relationship of node membership across different partitions and with respect to the hand-coded categories. Two-layer Sankey diagrams (e.g., Fig. FIGREF22) reflect the correspondence between MS clusters and the hand-coded external categories, whereas we use a multilayer Sankey diagram in Fig. FIGREF21 to present the multi-resolution MS community detection across scales.
Normalised contingency tables: To capture the relationship between our MS clusters and the hand-coded categories, we also provide a complementary visualisation as z-score heatmaps of normalised contingency tables, e.g., Fig. FIGREF22. This allows us to compare the relative association of content clusters to the external categories at different resolution levels. A quantification of the overall correspondence is also provided by the $NMI$ score in Eq. (DISPLAY_FORM17).
Word clouds of increased intelligibility through lemmatisation: Our method clusters text documents according to their intrinsic content. This can be understood as a type of topic detection. To visualise the content of clusters, we use Word Clouds as basic, yet intuitive, summaries of information to extract insights and compare a posteriori with hand-coded categories. They can also provide an aid for monitoring results when used by practitioners.
The stemming methods described in Section SECREF3 truncate words severely to enhance the power of the language processing computational methods by reducing the redundancy in the word corpus. Yet when presenting the results back to a human observer, it is desirable to report the cluster content with words that are readily comprehensible. To generate comprehensible word clouds in our a posteriori analyses, we use a text processing method similar to the one described in BIBREF46. Specifically, we use the part of speech (POS) tagging module from NLTK to leave out sentence parts except the adjectives, nouns, and verbs. We also remove less meaningful common verbs such as `be', `have', and `do' and their variations. The remaining words are then lemmatised in order to normalise variations of the same word. Finally, we use the Python library wordcloud to create word clouds with 2 or 3-gram frequency list of common word groups.
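A sketch of this word-cloud generation for a single content cluster is given below; computing the 2-gram frequencies over the concatenated token stream is a simplification of this sketch, and `cluster_texts` is a placeholder for the raw descriptions assigned to the cluster.

```python
# Sketch of the lemmatised word-cloud generation for one content cluster.
from collections import Counter
import nltk
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud

lemmatizer = WordNetLemmatizer()
KEEP_TAGS = ("JJ", "NN", "VB")                 # adjectives, nouns, verbs
COMMON_VERBS = {"be", "have", "do"}            # removed together with their variations

words = []
for text in cluster_texts:
    for token, tag in nltk.pos_tag(nltk.word_tokenize(text.lower())):
        if tag.startswith(KEEP_TAGS) and token.isalpha():
            lemma = lemmatizer.lemmatize(token, pos="v" if tag.startswith("VB") else "n")
            if lemma not in COMMON_VERBS:
                words.append(lemma)

bigram_freq = Counter(zip(words, words[1:]))   # 2-gram frequency list
cloud = WordCloud(width=800, height=400).generate_from_frequencies(
    {" ".join(bigram): count for bigram, count in bigram_freq.items()})
cloud.to_file("cluster_wordcloud.png")
```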
Graph-based framework for text analysis and clustering ::: Quantitative benchmarking of topic clusters
Although our dataset has a classification hand-coded by a human operator, we do not use it in our analysis. Indeed, one of our aims is to explore the relevance of the fixed external classes as compared to content-driven groupings obtained in an unsupervised manner. Therefore we provide a double route to quantify the quality of the clusters by computing two complementary measures: (i) an intrinsic measure of topic coherence, and (ii) a measure of similarity to the external hand-coded categories.
Topic coherence of text: As an intrinsic measure of consistency of word association, we use the pointwise mutual information ($PMI$) BIBREF19, BIBREF47. The $PMI$ is an information-theoretical score that captures the probability of words being used together in the same group of documents. The $PMI$ score for a pair of words $(w_1,w_2)$ is:
$$PMI(w_1,w_2) = \log \frac{P(w_1 w_2)}{P(w_1)\,P(w_2)} \, ,$$
where the probabilities of the words $P(w_1)$, $P(w_2)$, and of their co-occurrence $P(w_1 w_2)$ are obtained from the corpus. We obtain an aggregate $\widehat{PMI}$ for the graph partition $C=\lbrace c_i\rbrace $ by computing the $PMI$ for each cluster, as the median $PMI$ between its 10 most common words (changing the number of words gives similar results), and computing the weighted average of the $PMI$ cluster scores:
$$\widehat{PMI}(C) = \sum _{c_i \in C} \frac{n_i}{N} \; \underset{w_j,w_k \in S_i}{\operatorname{median}} \, PMI(w_j,w_k) \, ,$$
where $c_i$ denotes the clusters in partition $C$, each with size $n_i$, so that $N=\sum _{c_i \in C} n_i$ is the total number of nodes. Here $S_i$ denotes the set of top 10 words for cluster $c_i$.
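A sketch of this computation is given below; it estimates word probabilities and co-occurrences at the level of individual documents, which is one plausible reading of the definition above, and assumes `doc_tokens` (token lists) and `clusters` (cluster id to document indices) are available.

```python
# Sketch of the aggregate PMI score over a partition into clusters.
import numpy as np
from collections import Counter

def pmi(w1, w2, doc_sets):
    N = len(doc_sets)
    p1 = sum(w1 in d for d in doc_sets) / N
    p2 = sum(w2 in d for d in doc_sets) / N
    p12 = sum((w1 in d) and (w2 in d) for d in doc_sets) / N
    return np.log(p12 / (p1 * p2)) if p12 > 0 else 0.0

def aggregate_pmi(clusters, doc_tokens, n_top=10):
    doc_sets = [set(d) for d in doc_tokens]
    weighted_sum, total = 0.0, 0
    for doc_ids in clusters.values():
        counts = Counter(w for i in doc_ids for w in doc_tokens[i])
        top = [w for w, _ in counts.most_common(n_top)]
        pair_scores = [pmi(a, b, doc_sets)
                       for i, a in enumerate(top) for b in top[i + 1:]]
        weighted_sum += len(doc_ids) * np.median(pair_scores)
        total += len(doc_ids)
    return weighted_sum / total
```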
The $PMI$ score has been shown to perform well BIBREF19, BIBREF47 when compared to human interpretation of topics on different corpora BIBREF48, BIBREF49, and is designed to evaluate topical coherence for groups of documents, in contrast to other tools aimed at short forms of text. See BIBREF26, BIBREF27, BIBREF50, BIBREF51 for other examples.
Here, we use the $\widehat{PMI}$ score to evaluate partitions without any reference to an externally labelled `ground truth'.
Similarity between the obtained partitions and the hand-coded categories: To quantify how our content-driven unsupervised clusters compare against the external classification, we use the normalised mutual information ($NMI$), a well-known information-theoretical score that quantifies the similarity between clusterings considering correct and incorrect assignments in terms of the information between the clusterings. The NMI between two partitions $C$ and $D$ of the same graph is:
$$NMI(C,D) = \frac{I(C,D)}{\sqrt{H(C)\,H(D)}} \, ,$$
where $I(C,D)$ is the Mutual Information and $H(C)$ and $H(D)$ are the entropies of the two partitions.
The $NMI$ is bounded ($0 \le NMI \le 1$) and a higher value corresponds to higher similarity of the partitions (i.e., $NMI=1$ when there is perfect agreement between partitions $C$ and $D$). The $NMI$ score is directly related to the V-measure in the computer science literature BIBREF52.
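In practice the $NMI$ can be computed directly with scikit-learn, as sketched below; the geometric averaging is an assumption consistent with the normalisation written above, and the label arrays are placeholders.

```python
# Sketch of the NMI between MS clusters and the hand-coded categories.
from sklearn.metrics import normalized_mutual_info_score

nmi = normalized_mutual_info_score(handcoded_labels, ms_labels,
                                   average_method="geometric")
print(f"NMI against hand-coded categories: {nmi:.3f}")
```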
Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm
As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification.
The supervised classification was carried out by training on features and text three classifiers commonly applied to text classification tasks BIBREF22, BIBREF23: a Ridge classifier, Support Vector Machines with a linear kernel, and Random Forests. The goal is to predict the degree of harm (DoH) among five possible values (1-5). The classification is carried out with five-fold cross validation, using 80% of the data to train the model and the remaining 20% to test it. As a measure of performance of the classifiers and models, we use the weighted average of the F1 score for all levels of DoH, which takes into account both precision and recall, i.e., both the exactness and completeness of the model.
Application to the clustering of hospital incident text reports
We showcase our methodology through the analysis of the text from NRLS patient incident reports. In addition to textual descriptions, the reports are hand-coded upon reporting with up to 170 features per case, including a two-level manual classification of the incidents.
Here, we only use the text component and apply our graph-based text clustering to a set of 3229 reports from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014. As summarised in Figure FIGREF2, we start by training our Doc2Vec text embedding using the full 13+ million records collected by the NRLS since 2004 (although, as discussed above, a much smaller corpus of NRLS documents can be used). We then infer vectors for our 3229 records, compute the cosine similarity matrix and construct an MST-kNN graph with $k=13$ for our graph-based clustering. (We have confirmed the robustness of the MST-kNN construction in our data for $k>13$ by scanning values of $k \in [1,50]$, see Section SECREF27). We then applied Markov Stability, a multi-resolution graph partitioning algorithm to the MST-kNN graph. We scan across Markov time ($t \in [0.01, 100]$ in steps of 0.01). At each $t$, we run 500 independent Louvain optimisations to select the optimal partition found, as well as quantifying the robustness to optimisation by computing the average variation of information $VI(t)$ between the top 50 partitions. Once the full scan across $t$ is finalised, we compute $VI(t,t^{\prime })$, the variation of information between the optimised partitions found across the scan in Markov time, to select partitions that are robust across scales.
Application to the clustering of hospital incident text reports ::: Markov Stability extracts content clusters at different levels of granularity
Figure FIGREF21 presents a summary of our MS analysis. We plot the number of clusters of the optimal partition and the two metrics of variation of information across all Markov times. The existence of a long plateau in $VI(t,t^{\prime })$ coupled to a dip in $VI(t)$ implies the presence of a partition that is robust both to the optimisation and across Markov time. To illustrate the multi-scale features of the method, we choose several of these robust partitions, from finer (44 communities) to coarser (3 communities), obtained at five Markov times and examine their structure and content. The multi-level Sankey diagram summarises the relationship of the partitions across levels.
The MS analysis of the graph reveals a multi-level structure of partitions, with a strong quasi-hierarchical organisation. We remark that our optimisation does not impose any hierarchical structure a priori, so that the observed consistency of communities across levels is intrinsic to the data and suggests the existence of sub-themes that integrate into larger thematic categories. The unsupervised detection of intrinsic scales by MS enables us to obtain groups of records with high content similarity at different levels of granularity. This capability can be used by practitioners to tune the level of description to their specific needs, and is used below as an aid in our supervised classification task in Section SECREF4.
To ascertain the relevance of the layers of content found by MS, we examined the five levels of resolution in Figure FIGREF21. For each level, we produced lemmatised word clouds, which we used to generate descriptive content labels for the communities. We then compared a posteriori the content clusters with the hand-coded categories through a Sankey diagram and a contingency table. The results are shown in Figures FIGREF22–FIGREF25 for each of the levels.
The partition into 44 communities presents content clusters with well-defined characterisations, as shown by the Sankey diagram and the highly clustered structure of the contingency table (Figure FIGREF22). Compared to the 15 hand-coded categories, this 44-community partition provides finer groupings corresponding to specific sub-themes within the generic hand-coded categories. This is apparent in the hand-coded classes `Accidents', `Medication', `Clinical assessment', `Documentation' and `Infrastructure', where a variety of meaningful subtopics are identified (see Fig. FIGREF23 for details). In other cases, however, the content clusters cut across the external categories, e.g., the clusters on labour ward, chemotherapy, radiotherapy and infection control are coherent in content but can belong to several of the external classes. At this level of resolution, our algorithm also identified highly specific topics as separate content clusters, including blood transfusions, pressure ulcer, consent, mental health, and child protection, which have no direct relationship with the external classes provided to the operator.
Figure FIGREF24A and FIGREF24B present the results for two partitions at medium level of resolution, where the number of communities (12 and 17) is close to that of hand-coded categories (15). As expected from the quasi-hierarchy detected by our multi-resolution analysis, we find that the communities in the 17-way and 12-way partitions emerge from consistent aggregation of the smaller communities in the 44-way partition in Figure FIGREF22. Focussing on the 12-way partition, we see that some of the sub-themes in Figure FIGREF23 are merged into more general topics. An example is Accidents (community 2 in Fig. FIGREF24A), a merger of seven finer communities, which corresponds well with the external category `Patient accidents'. A similar phenomenon is seen for the Nursing cluster (community 1), which falls completely under the external category `Infrastructure'. The clusters related to `Medication' similarly aggregate into a larger community (community 3), yet there still remains a smaller, specific community related to Homecare medication (community 12) with distinct content. Other communities, on the other hand, still cut across external categories. This is clearly observable in communities 10 and 11 (Samples/lab tests/forms and Referrals/appointments), which fall naturally across the `Documentation' and `Clinical Assessment' external categories. Similarly, community 9 (Patient transfers) sits across the `Admission/Transfer' and `Infrastructure' external categories, due to its relation to nursing and hospital constraints. A substantial proportion of records was hand-coded under the generic `Treatment/Procedure' class, yet MS splits it into content clusters that retain medical coherence, e.g., Radiotherapy (Comm. 4), Blood transfusions (Comm. 7), IV/cannula (Comm. 5), Pressure ulcer (Comm. 8), and the large community Labour ward (Comm. 6).
The medical specificity of the Radiotherapy, Pressure ulcer and Labour ward clusters means that they are still preserved as separate groups at the next level of coarseness in the 7-way partition (Figure FIGREF25A). The mergers in this case lead to larger communities referring to Medication, Referrals/Forms and Staffing/Patient transfers. Figure FIGREF25B shows the final level of agglomeration into 3 content clusters: records referring to Accidents; a group broadly referring to matters Procedural (referrals, forms, staffing, medical procedures) cutting across external categories; and the Labour ward cluster, still on its own as a subgroup with distinctive content.
This process of agglomeration of content, from sub-themes into larger themes, as a result of the multi-scale hierarchy of MS graph partitions is shown explicitly with word clouds in Figure FIGREF26 for the 17-, 12- and 7-way partitions. Our results show good overall correspondence with the hand-coded categories across resolutions, yet our results also reveal complementary categories of incidents not defined in the external classification. The possibility of tuning the granularity afforded by our method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes.
Application to the clustering of hospital incident text reports ::: Robustness of the results and comparison with other methods
We have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of our framework. Specifically, we evaluate the effect of: (i) using Doc2Vec embeddings instead of BoW vectors; (ii) the size of corpus for training Doc2Vec; (iii) the sparsity of the MST-kNN graph construction. We have also carried out quantitative comparisons to other methods for topic detection and clustering: (i) LDA-BoW, and (ii) several standard clustering methods.
Doc2Vec provides improved clusters compared to BoW: As compared to standard bag-of-words (BoW) representations, fixed-sized vector embeddings (Doc2Vec) produce lower-dimensional vector representations with higher semantic and syntactic content. Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity and is less sensitive to hyper-parameters BIBREF30. To quantify the improvement provided by Doc2Vec, we constructed an MST-kNN graph from TF-iDF vectors and ran MS on this TF-iDF similarity graph. Figure FIGREF28 shows that Doc2Vec outperforms BoW across all resolutions in terms of both $NMI$ and $\widehat{PMI}$ scores.
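The comparison relies on normalised mutual information against the hand-coded categories (and on topic coherence, not shown here). A toy sketch of the NMI comparison follows, with randomly generated label vectors standing in for the actual partitions.

```python
# Toy sketch of the NMI comparison between content clusters and hand-coded categories.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
hand_coded = rng.integers(0, 15, size=3229)      # 15 external categories (stand-in)
ms_doc2vec = rng.integers(0, 12, size=3229)      # MS partition on the Doc2Vec graph (stand-in)
ms_bow = rng.integers(0, 12, size=3229)          # MS partition on the TF-iDF graph (stand-in)

for name, labels in [("Doc2Vec graph", ms_doc2vec), ("TF-iDF graph", ms_bow)]:
    print(name, normalized_mutual_info_score(hand_coded, labels))
```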
Robustness to the size of the Doc2Vec training dataset: Table TABREF5 indicates a small effect of the size of the training corpus on the Doc2Vec model. To confirm this, we trained two additional Doc2Vec models on sets of 1 million and 2 million records (randomly chosen from the full 13+ million records) and followed the same procedure to construct the MST-kNN graph and carry out the MS analysis. Figure FIGREF29 shows that the performance is affected only mildly by the size of the Doc2Vec training set.
Robustness to the level of graph sparsification: We sparsify the matrix of cosine similarities using the MST-kNN graph construction. The smaller the value of $k$, the sparser the graph. Sparser graphs have computational advantages for community detection algorithms, but too much sparsification degrades the results. Figure FIGREF30 shows the effect of sparsification in the graph construction on the performance of MS clusters. Our results are robust to the choice of $k$, provided it is not too small: both the $NMI$ and $\widehat{PMI}$ scores reach a similar level for values of $k$ above 13-16. Due to computational efficiency, we favour a relatively small value of $k=13$.
Comparison of MS partitions to Latent Dirichlet Allocation with Bag-of-Words (LDA-BoW): We have compared the MS results to LDA, a widely used methodology for text analysis. A key difference in LDA is that a different model needs to be trained when the number of topics changes, whereas our MS method produces clusterings at all levels of resolution in one go. To compare the outcomes, we trained five LDA models corresponding to the five MS levels in Figure FIGREF21. Table TABREF31 shows that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with $NMI$), with some differences depending on the scale, whereas the MS clusters have higher topic coherence (as given by $\widehat{PMI}$) across all scales.
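For reference, the LDA-BoW baseline can be sketched with gensim as below; the tokenised reports are toy stand-ins, and one such model has to be trained per number of topics (here matching one of the five MS levels).

```python
# Sketch of the LDA-BoW baseline (gensim); a separate model is needed per topic count.
from gensim import corpora
from gensim.models import LdaModel

tokenised_reports = [                      # toy stand-ins for the pre-processed incident texts
    ["patient", "fell", "ward", "floor"],
    ["wrong", "dose", "medication", "administered"],
    ["delay", "transfer", "bed", "ward"],
]
dictionary = corpora.Dictionary(tokenised_reports)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenised_reports]

lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=3,   # e.g. 3, 7, 12, 17 or 44
               passes=10, random_state=0)
print(lda.get_document_topics(bow_corpus[0]))
```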
To give an indication of computational cost, we ran both methods on the same servers. Our method takes approximately 13 hours in total (11 hours to train the Doc2Vec model on 13 million records and 2 hours to produce the full MS scan with 400 partitions across all resolutions). The time required to train just the 5 LDA models on the same corpus amounts to 30 hours (with timings ranging from $\sim $2 hours for the 3 topic LDA model to 12.5 hours for the 44 topic LDA model). This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. The MS partitions at all resolutions are available for further investigation if so needed.
Comparison of MS to other partitioning and community detection algorithms: We have partitioned the same kNN-MST graph using several well-known algorithms readily available in code libraries (i.e., the iGraph module for Python): Modularity Optimisation BIBREF53, InfoMap BIBREF5, Walktrap BIBREF54, Label Propagation BIBREF55, and Multi-resolution Louvain BIBREF43. Note that, in contrast with our multiscale MS analysis, these methods give just one partition at a particular resolution (or two for the Louvain implementation in iGraph). Figure FIGREF32 shows that MS provides improved or equal results to all those other graph partitioning methods for both $NMI$ and $\widehat{PMI}$ across all scales. Only for very fine resolution (more than 50 clusters) does Infomap, which partitions graphs into small clique-like subgraphs BIBREF40, BIBREF56, provide a slightly improved $NMI$. Therefore, MS finds both relevant and high quality clusterings across all scales by sweeping the Markov time parameter.
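A sketch of how such baseline partitions can be obtained with python-igraph is given below; the toy graph stands in for the actual MST-kNN graph, and each call returns a single clustering rather than a multi-scale family.

```python
# Sketch of the baseline community detection runs (python-igraph) on a toy weighted graph.
import numpy as np
import igraph as ig

rng = np.random.default_rng(0)
n = 60
sim = rng.random((n, n)); sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 0.0)
adjacency = np.where(sim > 0.8, sim, 0.0)          # toy sparse stand-in for the MST-kNN graph

edges, weights = [], []
for i in range(n):
    for j in range(i + 1, n):
        if adjacency[i, j] > 0:
            edges.append((i, j)); weights.append(float(adjacency[i, j]))

g = ig.Graph(n=n, edges=edges)
g.es["weight"] = weights

partitions = {
    "louvain": g.community_multilevel(weights="weight"),
    "infomap": g.community_infomap(edge_weights="weight"),
    "walktrap": g.community_walktrap(weights="weight").as_clustering(),
    "label_propagation": g.community_label_propagation(weights="weight"),
}
for name, clustering in partitions.items():
    print(name, len(clustering), "communities")
```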
Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier
Here we approach the task of training a supervised classifier that predicts the degree of harm of an incident based on other features of the record (such as location, external category, and medical specialty) and on the textual component of the report. To this end, we use the embedded text vectors and MS cluster labels of the records as features to predict the degree of harm to the patient.
Each NRLS record has more than 170 features filled manually by healthcare staff, including the degree of harm (DoH) to the patient, a crucial assessment of the reported incident. The incident is classified into five levels: 'No harm', 'Low', 'Moderate', 'Severe', and 'Death'. However, the reported DoH is not consistent across hospitals and can be unreliable BIBREF6.
The lack of reliability of the recorded DoH poses a challenge when training supervised models. Given the size of the dataset, it is not realistic to ask medics to re-evaluate incidents manually. Instead, we use the publicly available `Learning from mistakes league table' based on NHS staff survey data to identify organisations (NHS Trusts) with `outstanding' (O) and `poor reporting culture' (PRC). Our hypothesis is that training our classifiers on records from organisations with better rankings in the league table should lead to improved prediction. If there is a real disparity in the manual classification among organisations, only incidents labelled by O-ranked Trusts should be regarded as a `ground truth'.
Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier ::: Supervised classification of degree of harm
We study NRLS incidents reported between 2015 and 2017 from O-ranked and PRC-ranked Trusts. The 2015-17 NRLS dataset is very unbalanced: there are 2,038,889 “No harm” incidents against only 6,754 “Death” incidents. To tackle this issue, we sample our dataset as recommended by BIBREF8, and randomly select 1,016 records each of `No harm', `Low', and `Moderate', and 508 records each of `Severe' and `Death' incidents, from each type of Trust. We thus obtain two datasets (O and PRC) consisting of a total of 4,064 incidents each.
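The balanced sampling can be sketched as below; the DataFrame is a random stand-in for the NRLS records of one type of Trust, and the field name is illustrative.

```python
# Sketch of the balanced sampling of incidents per degree-of-harm level (illustrative names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
levels = ["No harm", "Low", "Moderate", "Severe", "Death"]
df = pd.DataFrame({"degree_of_harm": rng.choice(levels, size=20000)})   # stand-in records

quotas = {"No harm": 1016, "Low": 1016, "Moderate": 1016, "Severe": 508, "Death": 508}
balanced = pd.concat(
    [df[df["degree_of_harm"] == lvl].sample(n=n, random_state=0) for lvl, n in quotas.items()]
)
print(balanced["degree_of_harm"].value_counts())
```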
For each dataset (O and PRC), we train three classifiers (Ridge, Support Vector Machine with a linear kernel, and Random Forest) with five-fold cross validation, and we compute the F-1 scores of each fold to evaluate the model performance. We first train models using three categories from the reports: location (L), external hand-coded category (C), and medical specialty (S). We also compute the performance of models trained on text features, both TF-iDF and Doc2Vec. We also study models trained on a mixture of text and categories. Finally, we run Markov Stability as described above to obtain cluster labels for each dataset (O and PRC) at different resolutions (70, 45, 30 and 13 communities). We then evaluate if it is advantageous to include the labels of the MS clusters as additional features.
Table TABREF34 presents the results of our numerical experiments. Our first observation is that, for this data, SVM with linear kernel has the best performance (similar to Ridge), and Random Forests perform poorly in general. There are several conclusions from our study. First, there is a consistent difference between the scores of the O and PRC datasets (ranging from 1.7% to 11.2% for an average of 5.6%), thus confirming our hypothesis that automated classification performs better when training with data from organisations with better rankings in the league table. Second, using text features is highly advantageous in predicting the degree of harm compared to category alone: there is a substantial increase of up to 100% in the F1 score between column 1 (all three categories) and column 2 (TF-iDF). Furthermore, adding categorical features (L, C, or S) to the TF-iDF text features improves the scores only marginally (around 2%), as seen by comparing columns 3–6 with column 2.
Given the demonstrated importance of text, we studied the effect of using more refined textual features for classification. In columns 7-10, we considered the effect of adding to TF-iDF the MS labels extracted from our text analysis (as described above), and we find a larger improvement of around 7% with respect to mere TF-iDF (column 2). The improvement is larger for finer clusterings into 70 and 45 communities, which contain enough detail that can be associated with levels of risk (e.g., type of accident). This supports the value of the multi-resolution groupings we have extracted through our analysis.
We also studied the impact of using Doc2Vec vectors as features. Interestingly, the comparison between columns 2 and 11 shows that there is only a slight improvement of 2% when using Doc2Vec instead of TF-iDF features for the case of records from O-ranked institutions, but the improvement is 12% for the records from PRC Trusts. This difference suggests that the usage of terms is more precise in O-ranked hospitals so that the differences between TF-iDF and Doc2Vec are minimised, while the advantages of the syntactic and semantic reconstruction of the Doc2Vec embedding become more important in the case of PRC Trusts.
Based on these findings, we build our final model that uses a Support Vector Machine classifier with both Doc2Vec embeddings and the MS labels for 30 content clusters (encoded via a One-Hot encoder) as features. We choose to keep only 30 communities as this performs well when combined with the Doc2Vec embedding (without slowing down the classifier too much). We performed a grid search to optimise the hyperparameters of our model (penalty = 10, tolerance for stopping criterion = 0.0001, linear kernel). For the O-ranked records, our model achieves a weighted F1 score of 0.657, with a 19% improvement with respect to TF-iDF text features and a 107% improvement with respect to categorical features. (For the PRC records, the corresponding improvements are 33% and 215%, respectively.) Note that similar improvements are also obtained for the other classifiers when using Doc2Vec and MS labels as features. It is also worth noting that the difference in the prediction of DoH between PRC and O-ranked records is reduced when using text tools and, specifically, the F1-score of the SVM classifier based on Doc2Vec with MS is almost the same for both datasets. Hence the difference in the quality of the reporting categories can be ameliorated by the use of the textual content of the reports. We summarise the main comparison of the performance of the SVM classifier based on categorical, raw text, and text with content for both datasets in Figure FIGREF35.
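A sketch of this final configuration is shown below; embeddings, cluster labels and DoH values are random stand-ins, and the grid is only indicative of the hyper-parameters that were searched.

```python
# Sketch of the final model: Doc2Vec vectors + one-hot MS cluster labels, linear SVM, grid search.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(800, 300))          # stand-in for Doc2Vec embeddings
ms_labels = rng.integers(0, 30, size=(800, 1))     # stand-in for the 30-community MS labels
y = rng.integers(1, 6, size=800)                   # stand-in for the degree of harm

one_hot = OneHotEncoder().fit_transform(ms_labels).toarray()
X = np.hstack([doc_vectors, one_hot])

grid = GridSearchCV(SVC(kernel="linear"),
                    param_grid={"C": [0.1, 1, 10], "tol": [1e-3, 1e-4]},
                    scoring="f1_weighted", cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```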
Examination of the types of errors and ex novo re-classification by clinicians:
A further analysis of the confusion matrices used to compute the F1 score reveals that most of the errors of our model are concentrated in the `No harm', `Low harm' and `Moderate harm' categories, whereas fewer errors are incurred in the `Severe harm' and `Death' categories. Therefore, our method is more likely to return false alarms rather than missing important and harmful incidents.
In order to have a further evaluation of our results, we asked three clinicians to analyse ex novo a randomly chosen sample of 135 descriptions of incidents, and to determine their degree of harm based on the information in the incident report. The sample was selected from the O-ranked dataset and no extra information apart from the text was provided. We then compared the DoH assigned by the clinicians with both the results of our classifier and the recorded DoH in the dataset.
The agreement rate of the clinicians' assessment with the recorded DoH was remarkably low. For example, the agreement in the `No Harm' incidents was only 38%, and in the `Severe' incidents only 49%. In most cases, though, the disparities amounted to switching the DoH by one degree above or below. To reduce this variability, we analysed the outcomes in terms of three larger groups: `No Harm' and `Low Harm' incidents were considered as one outcome; `Moderate Harm' was kept separate; and `Severe Harm' and `Death' were grouped as one outcome, since they both need to be notified to NHS safety managers.
The results are presented in Table TABREF36. Our classification agrees with the ex novo assessment of the clinicians as well as the pre-existing DoH in the dataset does, but our method has higher agreement in the severe and deadly incidents. These results confirm that our method performs as well as the original annotators but is better at identifying risky events.
Discussion
We have applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and analyses the ensuing similarity graph of documents through multi-resolution capabilities to capture clusters without imposing a priori their number or structure. The different levels of resolution found to be relevant can be chosen by the practitioner to suit the requirements of detail for each specific task. For example, the top level categories of the pre-defined classification hierarchy are highly diverse in size, with large groups such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure' alongside small, specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'. Our multi-scale partitioning finds additional subcategories with medical detail within some of the large categories (Fig. FIGREF22 and FIGREF23).
Our a posteriori analysis showed that the method recovers meaningful clusters of content as measured by the similarity of the groups against the hand-coded categories and by the intrinsic topic coherence of the clusters. The clusters have high medical content, thus providing complementary information to the externally imposed classification categories. Indeed, some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, even if they cannot be mapped to standardised external categories.
An area of future research will be to confirm whether the finer unsupervised clusters found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories), which is used less consistently in hospital settings. The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) BIBREF9 to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care.
We have used our clusters within a supervised classifier to predict the degree of harm of an incident based only on free-text descriptions. The degree of harm is an important measure in hospital evaluation and has been shown to depend on the reporting culture of the particular organisation. Overall, our results show that text descriptions complemented by the topic labels extracted by our method lead to improved performance in this task. The use of such enhanced NLP tools could help improve reporting frequency and quality, in addition to reducing burden to staff, since most of the necessary information can be retrieved automatically from text descriptions. Further work would aim to add interpretability to the supervised classification BIBREF57, so as to provide medical staff with a clearer view of the outcomes of our method and to encourage its uptake.
One of the advantages of a free text analytical approach is the provision, in a timely manner, of an intelligible description of incident report categories derived directly from the 'words' of the reporters themselves. Insights from the analysis of such free text entries can add richer information than would otherwise have been obtained from pre-defined classes. Not only could this improve the current state of play where much of the free text of these reports goes unused, but, by avoiding the strict assignment to pre-defined categories of fixed granularity, free text analysis could open an opportunity for feedback and learning through more nuanced classifications as a complementary axis to existing approaches.
Currently, local incident reporting systems used by hospitals to submit reports to the NRLS require risk managers to improve data quality, due to errors or uncertainty in categorisation. The application of free text analytical approaches has the potential to free up time from this labour-intensive task, focussing instead on quality improvement derived from the content of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit existing categories, by using methods for anomaly detection to decide whether new topic clusters should be created. This is a direction of future work.
Further work also includes the use of our method to enable comparisons across healthcare organisations and also to monitor changes in their incident reports over time. Another interesting direction is to provide online classification suggestions to users based on the text they input as an aid with decision support and data collection, which can also help fine-tune the predefined categories. Finally, it would be interesting to test if the use of deep learning algorithms can improve our classification scores.
We thank Elias Bamis, Zijing Liu and Michael Schaub for helpful discussions. This research was supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre and NIHR Imperial Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. All authors acknowledge support from the EPSRC through award EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare. | A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18 |
73e715e485942859e1db75bfb5f35f1d5eb79d2e | 73e715e485942859e1db75bfb5f35f1d5eb79d2e_0 | Q: How can a neural model be used for a retrieval if the input is the entire Wikipedia?
Text: Introduction
Natural language based consumer products, such as Apple Siri and Amazon Alexa, have found widespread use in the last few years. A key requirement for these conversational systems is the ability to answer factual questions from the users, such as those about movies, music, and artists.
Most of the current approaches for Question Answering (QA) are based on structured Knowledge Bases (KB) such as Freebase BIBREF0 and Wikidata BIBREF1 . In this setting the question is converted to a logical form using semantic parsing, which is queried against the KB to obtain the answer BIBREF2 , BIBREF3 . However, recent studies have shown that even large curated KBs, such as Freebase, are incomplete BIBREF4 . Further, KBs support only certain types of answer schemas, and constructing and maintaining them is expensive.
On the other hand, there is a vast amount of unstructured knowledge available in textual form from web pages such as Wikipedia, and hence an alternative is to directly answer questions from these documents. In this approach, shown in Figure 1 , articles relevant to the question are first selected (retrieval step). Then, the retrieved articles and question are jointly processed to extract the answer (comprehension step). This retrieval based approach has a longer history than the KB based approach BIBREF5 . It can potentially provide a much wider coverage over questions, and is not limited to specific answer schemas. However, there are still gaps in its performance compared to the KB-based approach BIBREF6 . The comprehension step, which requires parsing information from natural language, is the main bottleneck, though suboptimal retrieval can also lead to lower performance.
Several large-scale datasets introduced recently BIBREF7 , BIBREF8 have facilitated the development of powerful neural models for reading comprehension. These models fall into one of two categories: (1) those which extract answers as a span of text from the document BIBREF9 , BIBREF10 , BIBREF11 (Figure 2 top); (2) those which select the answer from a fixed vocabulary BIBREF12 , BIBREF6 (Figure 2 bottom). Here we argue that depending on the type of question, either (1) or (2) may be more appropriate, and introduce a latent variable mixture model to combine the two in a single end-to-end framework.
We incorporate the above mixture model in a simple Recurrent Neural Network (RNN) architecture with an attention mechanism BIBREF13 for comprehension. In the second part of the paper we focus on the retrieval step for the QA system, and introduce a neural network based ranking model to select the articles to feed the comprehension model. We evaluate our model on the WikiMovies dataset, which consists of 200K questions about movies, along with 18K Wikipedia articles for extracting the answers. BIBREF6 applied Key-Value Memory Networks (KV-MemNN) to the dataset, achieving 76.2% accuracy. Adding the mixture model for answer selection improves the performance to 85.4%. Further, the ranking model improves both precision and recall of the retrieved articles, and leads to an overall performance of 85.8%.
WikiMovies Dataset
We focus on the WikiMovies dataset, proposed by BIBREF6 . The dataset consists of pairs of questions and answers about movies. Some examples are shown in Table 1 .
As a knowledge source approximately 18K articles from Wikipedia are also provided, where each article is about a movie. Since movie articles can be very long, we only use the first paragraph of the article, which typically provides a summary of the movie. Formally, the dataset consists of question-answer pairs $\lbrace (q_j, A_j)\rbrace _{j=1}^J$ and movie articles $\lbrace d_k\rbrace _{k=1}^K$ . Additionally, the dataset includes a list of entities: movie titles, actor names, genres etc. Answers to all the questions are in the entity list. The questions are created by human annotators using SimpleQuestions BIBREF14 , an existing open-domain question answering dataset, and the annotated answers come from facts in two structured KBs: OMDb and MovieLens.
There are two splits of the dataset. The “Full” dataset consists of 200K pairs of questions and answers. In this dataset, some questions are difficult to answer from Wikipedia articles alone. A second version of the dataset, “Wiki Entity” is constructed by removing those QA pairs where the entities in QAs are not found in corresponding Wikipedia articles. We call these splits WikiMovies-FL and WikiMovies-WE, respectively. The questions are divided into train, dev and test such that the same question template does not appear in different splits. Further, they can be categorized into 13 categories, including movie_to_actors, director_to_movies, etc. The basic statistics of the dataset are summarized in Table 2 .
We also note that more than 50% of the entities appear less than 5 times in the training set. This makes it very difficult to learn the global statistics of each entity, necessitating the use of an external knowledge source.
Comprehension Model
Our QA system answers questions in two steps, as shown in Figure 1 . The first step is retrieval, where articles relevant to the question are retrieved. The second step is comprehension, where the question and retrieved articles are processed to derive answers.
In this section we focus on the comprehension model, assuming that relevant articles have already been retrieved and merged into a context document. In the next section, we will discuss approaches for retrieving the articles.
BIBREF6 , who introduced WikiMovies dataset, used an improved variant of Memory Networks called Key-Value Memory Networks. Instead, we use RNN based network, which has been successfully used in many reading comprehension tasks BIBREF10 , BIBREF9 , BIBREF12 .
WikiMovies dataset has two notable differences from many of the existing comprehension datasets, such as CNN and SQuAD BIBREF10 , BIBREF9 , BIBREF12 . First, with imperfect retrieval, the answer may not be present in the context. We handle this case by using the proposed mixture model. Second, there may be multiple answers to a question, such as a list of actors. We handle this by optimizing a sum of the cross-entropy loss over all possible answers.
We also use attention sum architecture proposed by BIBREF10 , which has been shown to give high performance for comprehension tasks. In this approach, attention scores over the context entities are used as the output. We term this the attention distribution $p_{att}$ , defined over the entities in the context. The mixture model combines this distribution with another output probability distribution $p_{vocab}$ over all the entities in the vocabulary. The intuition behind this is that named entities (such as actors and directors) can be better handled by the attention part, since there are few global statistics available for these, and other entities (such as languages and genres) can be captured by vocabulary part, for which global statistics can be leveraged.
Comprehension model detail
Let $\mathcal {V}$ be the vocabulary consisting of all tokens in the corpus, and $\mathcal {E}$ be the set of entities in the corpus. The question is converted to a sequence of lower-cased word ids, $(w_i) \in \mathcal {V}$, and a sequence of 0-1 flags for word capitalization, $(c_i) \in \lbrace 0,1\rbrace $. For each word position $i$, we also associate an entity id if the i-th word is part of an entity, $e_i \in \mathcal {E}$ (see Figure 3). Then, the combined embedding of the i-th position is given by
$$x_i = W_w(w_i) + W_c(c_i) \Vert W_e(e_i), \hspace{7.22743pt} (i=1,\ldots ,L_q), $$ (Eq. 12)
where $\Vert $ is the concatenation of two vectors, $L_q$ is the number of words in a question $q$, and $W_w, W_c$ and $W_e$ are embedding matrices. Note that if there is no entity at the i-th position, $W_e(e_i)$ is set to zero. The context is composed of up to $M$ movie articles concatenated with a special separation symbol. The contexts are embedded in exactly the same way as questions, sharing the embedding matrices.
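A minimal PyTorch sketch of this combined embedding (Eq. 12) is shown below; vocabulary sizes and dimensions are illustrative.

```python
# Sketch of the combined word/capitalisation/entity embedding of Eq. 12 (PyTorch).
import torch
import torch.nn as nn

d_w, d_e = 100, 100
W_w = nn.Embedding(50000, d_w)                      # word ids
W_c = nn.Embedding(2, d_w)                          # capitalisation flags (0/1)
W_e = nn.Embedding(601, d_e, padding_idx=0)         # anonymised entity ids; id 0 = "no entity"

def embed(word_ids, cap_flags, entity_ids):
    # W_w(w_i) + W_c(c_i), concatenated with W_e(e_i); e_i = 0 yields a zero entity vector
    return torch.cat([W_w(word_ids) + W_c(cap_flags), W_e(entity_ids)], dim=-1)

x = embed(torch.tensor([[12, 7, 3]]), torch.tensor([[1, 0, 0]]), torch.tensor([[5, 0, 0]]))
print(x.shape)                                      # (batch, length, d_w + d_e)
```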
To avoid overfitting, we use another technique called anonymization. We limit the number of columns of $W_e$ to a relatively small number, $n_e$ , and entity ids are mapped to one of $n_e$ columns randomly (without collision). The map is common for each question/context pair but randomized across pairs. The method is similar to the anonymization method used in CNN / Daily Mail datasets BIBREF8 . emergent:16 showed that such a procedure actually helps readers since it adds coreference information to the system.
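The anonymisation can be sketched as a per-pair random relabelling of the entity ids onto a small fixed set of slots, as below.

```python
# Sketch of entity anonymisation: entities in one question/context pair are mapped,
# without collision, onto n_e randomly chosen slot ids; the map changes across pairs.
import random

def anonymise(entity_ids, n_e=600, seed=None):
    rng = random.Random(seed)
    unique = sorted(set(entity_ids) - {0})                  # 0 marks "no entity here"
    slots = rng.sample(range(1, n_e + 1), len(unique))
    mapping = {0: 0, **dict(zip(unique, slots))}
    return [mapping[e] for e in entity_ids], mapping

ids, mapping = anonymise([101, 0, 205, 101, 0, 999], seed=1)
print(ids, mapping)
```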
Next, the question embedding sequence $(x_i)$ is fed into a bidirectional GRU (BiGRU) BIBREF15 to obtain a fixed length vector $v$
$$v = \overrightarrow{h}_{q}(L_q) \Vert \overleftarrow{h}_{q}(0), $$ (Eq. 13)
where $\overrightarrow{h}_{q}$ and $\overleftarrow{h}_{q}$ are the final hidden states of forward and backward GRUs respectively.
The context embedding sequence is fed into another BiGRU, to produce the output $H_c = [h_{c,1}, h_{c,2}, \ldots h_{c,L_c}]$ , where $L_c$ is the length of the context. An attention score for each word position $i$ is given by
$$s_i \propto \exp ( v^T h_{c,i} ).$$ (Eq. 14)
The probability over the entities in the context is then given by
$$p_{att}(e) \propto \sum _{i \in I(e, c)} s_i,$$ (Eq. 15)
where $I(e,c)$ is the set of word positions in the entity $e$ within the context $c$ .
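A numpy sketch of Eqs. 14-15 is given below: per-position scores are softmax-normalised over the context and then pooled over the positions covered by each candidate entity. Shapes and names are illustrative.

```python
# Sketch of the attention-sum step: per-position scores pooled over entity occurrences.
import numpy as np

def attention_sum(v, H_c, entity_positions):
    # v: question vector (d,); H_c: context hidden states (L_c, d)
    # entity_positions: entity -> list of word positions I(e, c)
    logits = H_c @ v
    s = np.exp(logits - logits.max())
    s /= s.sum()                                     # s_i of Eq. 14
    return {e: float(s[pos].sum()) for e, pos in entity_positions.items()}   # unnormalised p_att

rng = np.random.default_rng(0)
scores = attention_sum(rng.normal(size=64), rng.normal(size=(10, 64)),
                       {"Ridley Scott": [0, 1], "Blade Runner": [4, 5, 6]})
print(scores)
```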
We next define the probability $p_{vocab}$ to be the probability over the complete set of entities in the corpus, given by
$$p_{vocab}(e) = {\rm Softmax}(V u), $$ (Eq. 16)
where the vector $u$ is given by $u = \sum _{i} s_i h_{c, i}$ . Each row of the matrix $V$ is the coefficient vector for an entity in the vocabulary. It is computed similar to Eq. ( 12 ).
$$V(e) = \sum _{w \in e} W_w(w) + \sum _{c \in e} W_c(c) \Vert W_e(e). $$ (Eq. 17)
The embedding matrices are shared between question and context.
The final probability that an entity $e$ answers the question is given by the mixture $p(e) = (1-g) p_{att}(e) + g p_{vocab}(e)$ , with the mixture coefficient $g$ defined as
$$g = \sigma (W_g g_0), \hspace{7.22743pt} g_0 = v^T u \Vert \max V u.$$ (Eq. 18)
The two components of $g_0$ correspond to the attention part and vocabulary part respectively. Depending on the strength of each, the value of $g$ may be high or low.
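The mixture itself reduces to a scalar interpolation; a toy numpy sketch, with random stand-ins for all quantities, is given below.

```python
# Toy sketch of the gated mixture: p(e) = (1 - g) * p_att(e) + g * p_vocab(e), gate as in Eq. 18.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_entities = 8
p_att = rng.random(n_entities); p_att[4:] = 0.0; p_att /= p_att.sum()   # entities in the context only
p_vocab = rng.random(n_entities); p_vocab /= p_vocab.sum()              # all entities in the vocabulary

v, u = rng.normal(size=64), rng.normal(size=64)
Vu = rng.normal(size=n_entities)            # stand-in for the logits V u behind Eq. 16
W_g = rng.normal(size=2)

g = sigmoid(W_g @ np.array([v @ u, Vu.max()]))      # gate from g_0 = v^T u || max V u
p = (1.0 - g) * p_att + g * p_vocab
print(g, p.sum())
```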
Since there may be multiple answers for a question, we optimize the sum of the probabilities:
$$\textrm {loss} = - \log \Big ( \sum _{a \in A_j} p(a|q_j,c_j) \Big ) $$ (Eq. 19)
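In other words, the probability mass of all gold answers is summed before taking the log; a minimal sketch:

```python
# Sketch of the multi-answer loss of Eq. 19: -log of the summed probability of all answers.
import numpy as np

def multi_answer_loss(p, answer_indices):
    return -np.log(p[answer_indices].sum() + 1e-12)

p = np.array([0.05, 0.40, 0.10, 0.30, 0.15])        # toy distribution over candidate entities
print(multi_answer_loss(p, [1, 3]))                 # e.g. a question with two correct answers
```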
Our overall model is displayed in Figure 4 .
We note that KV-MemNN BIBREF6 employs “Title encoding” technique, which uses the prior knowledge that movie titles are often in answers. BIBREF6 showed that this technique substantially improves model performance by over 7% for WikiMovies-WE dataset. In our work, on the other hand, we do not use any data specific feature engineering.
Retrieval Model
Our QA system answers questions by two steps as in Figure 1 . Accurate retrieval of relevant articles is essential for good performance of the comprehension model, and in this section we discuss three approaches for it. We use up to $M$ articles as context. A baseline approach for retrieval is to select articles which contain at least one entity also present in the question. We identify maximal intervals of words that match entities in questions and articles. Capitalization of words is ignored in this step because some words in the questions are not properly capitalized. Out of these (say $N$ ) articles we can randomly select $M$ . We call this approach (r0). For some movie titles, however, this method retrieves too many articles that are actually not related to questions. For example, there is a movie titled “Love Story” which accidentally picks up the words “love story”. This degrades the performance of the comprehension step. Hence, we describe two more retrieval models – (1) a dataset specific hand-crafted approach, and (2) a general learning based approach.
Hand-Crafted Model (r1)
In this approach, the $N$ articles retrieved using entity matching are assigned scores based on certain heuristics. If the movie title matches an entity in the question, the article is given a high score, since it is very likely to be relevant. A similar heuristic was also employed in BIBREF6 . In addition, the number of matching entities is also used to score each article. The top $M$ articles based on these scores are selected for comprehension. This hand-crafted approach already gives strong performance for the WikiMovies dataset, however the heuristic for matching article titles may not be appropriate for other QA tasks. Hence we also study a general learning based approach for retrieval.
Learning Model (R2)
The learning model for retrieval is trained by an oracle constructed using distant supervision. Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. For example, for x_to_movie question type, the answer movie articles are the correct articles to be retrieved. On the other hand, for questions in movie_to_x type, the movie in the question should be retrieved. Having collected the labels, we train a retrieval model for classifying a question and article pair as relevant or not relevant.
Figure 5 gives an overview of the model, which uses a Word Level Attention (WLA) mechanism. First, the question and article are embedded into vector sequences, using the same method as the comprehension model. We do not use anonymization here, to retain simplicity. Otherwise, the anonymization procedure would have to be repeated several times for a potentially large collection of documents. These vector sequences are next fed to a Bi-GRU, to produce the outputs $v$ (for the question) and $H_c$ (for the document) similar to the previous section.
To classify the article as relevant or not, we introduce a novel attention mechanism to compute the score,
$$s = \sum _{i} ((w \tilde{v} + b)^T \tilde{h}_{c,i})^4$$ (Eq. 25)
Each term in the sum above corresponds to the match between the query representation and a token in the context. This is passed through a 4-th order non-linearity so that relevant tokens are emphasized more. Next, we compute the probability that the article is relevant using a sigmoid:
$$o = \sigma (w^{\prime } s + b^{\prime })$$ (Eq. 27)
In the above, $\tilde{x}$ is the normalized version (by L2-norm) of vector $x$ , $w, b, w^{\prime }, b^{\prime }$ are scalar learnable parameters to control scales.
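A numpy sketch of this scoring function (Eqs. 25 and 27) is shown below, with random stand-ins for the question vector and the article hidden states.

```python
# Sketch of the Word Level Attention relevance score; tildes denote L2-normalised vectors
# and w, b, w_out, b_out are the scalar scale/shift parameters.
import numpy as np

def l2_normalise(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def wla_score(v, H_c, w=1.0, b=0.0, w_out=1.0, b_out=0.0):
    q = w * l2_normalise(v) + b                     # (w * v_tilde + b)
    h = l2_normalise(H_c, axis=1)                   # h_tilde for every article token
    s = np.sum((h @ q) ** 4)                        # 4th power emphasises strongly matching tokens
    return 1.0 / (1.0 + np.exp(-(w_out * s + b_out)))   # sigmoid -> probability of relevance

rng = np.random.default_rng(0)
print(wla_score(rng.normal(size=64), rng.normal(size=(30, 64))))
```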
Experiments
We evaluate the comprehension model on both WikiMovies-FL and WikiMovies-WE datasets. The performance is evaluated using the accuracy of the top hit (single answer) over all possible answers (all entities). This is called hits@1 metric.
For the comprehension model, we use embedding dimension 100, and GRU dimension 128. We use up to $M=10$ retrieved articles as context. The order of the articles are randomly shuffled for each training instance to prevent over-fitting. The size of the anonymized entity set $n_e$ is 600, since in most of the cases, number of entities in a question and context pair is less than 600.
For training the comprehension model, the Adam BIBREF16 optimization rule is used with batch size 32. We stop the optimization based on dev-set performance, and training takes around 10 epochs. For WikiMovies-FL (resp. WikiMovies-WE) dataset, each epoch took approximately 4 (resp. 2) hours on an Nvidia GTX1080 GPU.
For training the retrieval model R2, we use a binary cross entropy objective. Since most articles are not relevant to a question, the ratio of positive and negative samples is tuned to $1:10$. Each epoch for training the retrieval model takes about 40 minutes on an Nvidia GTX1080 GPU.
Performance of Retrieval Models
We evaluate the retrieval models based on precision and recall of the oracle articles. The evaluation is done on the test set. R@k is the ratio of cases where the highest ranked oracle article is in the top k retrieved articles. P@k is the ratio of oracle articles which are in the top k retrieved results. These numbers are summarized in Table 3 . We can see that both (r1) and (R2) significantly outperform (r0), with (R2) doing slightly better. We emphasize that (R2) uses no domain specific knowledge, and can be readily applied to other datasets where articles may not be about specific types of entities.
We have also tested simpler models based on inner product of question and article vectors. In these models, a question $q_j$ and article $d_k$ are converted to vectors $\Phi (q_j), \Psi (d_k)$ , and the relevance score is given by their inner product:
$${\rm score}(j,k) = \Phi (q_j)^T \Psi (d_k).$$ (Eq. 32)
From a computational point of view, those models are attractive because we can compute the article vectors offline, and do not need to compute the attention over words in the article. Maximum Inner Product Search algorithms may also be utilized here BIBREF17, BIBREF18. However, as shown in the upper block of Table 4, those models perform much worse in terms of scoring. The “Sum of Hidden States” and “Query Free Attention” models are similar to the WLA model, using BiGRUs for question and article. In both of those models, $\Phi (q)$ is defined in the same way as in the WLA model, Eq. (13). For the “Sum of Hidden States” model, $\Psi (d)$ is given by the sum of BiGRU hidden states; this is the same as the proposed model with the fourth-order non-linearity of WLA replaced by a first-order one. For the “Query Free Attention” model, $\Psi (d)$ is given by the sum of BiGRU hidden states.
We compare our model and several ablations with the KV-MemNN model. Table 5 shows the average performance across three evaluations. The (V) “Vocabulary Model” and (A) “Attention Model” are simplified versions of the full (AV) “Attention and Vocabulary Model”, using only $p_{vocab}$ and $p_{att}$ , respectively. Using a mixture of $p_{att}$ and $p_{vocab}$ gives the best performance.
Interestingly, for the WE dataset the Attention model works better. For the FL dataset, on the other hand, it is often impossible to select the answer from the context, and hence the Vocab model works better.
The number of entities in the full vocabulary is 71K, and some of these are rare. Our intuition to use the Vocab model was to only use it for common entities, and hence we next constructed a smaller vocabulary consisting of all entities which appear at least 10 times in the corpus. This results in a subset vocabulary $\mathcal {V}_S$ of 2400 entities. Using this vocabulary in the mixture model (AsV) further improves the performance.
Table 5 also shows a comparison between (r0), (r1), and (R2) in terms of the overall task performance. We can see that improving the quality of retrieved articles benefits the downstream comprehension performance. In line with the results of the previous section, (r1) and (R2) significantly outperform (r0). Among (r1) and (R2), (R2) performs slightly better.
Benefit of training methods
Table 6 shows the impact of anonymization of entities and shuffling of training articles before the comprehension step, described in Section "Comprehension Model" .
Shuffling the context articles before concatenating them works as a data augmentation technique. Entity anonymization helps because without it each entity has one embedding. Since most of the entities appear only a few times in the articles, these embeddings may not be properly trained. Instead, the anonymous embedding vectors are trained to distinguish different entities. This technique is motivated by a similar procedure used in the construction of CNN / Daily Mail BIBREF8, and discussed in detail in BIBREF19.
Visualization
Figure 6 shows a test example from the WikiMovies-FL test data. In this case, even though the answers “Hindi” and “English” are not in the context, they are correctly estimated from $p_{vocab}$. Note the high value of $g$ in this case. Figure 7 shows another example of how the mixture model works. Here the answer is successfully selected from the document instead of the vocabulary. Note the low value of $g$ in this case.
Performance in each category
Table 7 shows the comparison for each category of questions between our model and KV-MemNN for the WikiMovies-WE dataset. We can see that the performance improvements in the movie_to_x categories are relatively large. The KV-MemNN model has a dataset-specific “Title encoding” feature which helps the model on x_to_movie question types. However, without this feature, performance in other categories is poor.
Analysis of the mixture gate
The benefit of the mixture model comes from the fact that $p_{att}$ works well for some question types, while $p_{vocab}$ works well for others. Table 8 shows how often $p_{vocab}$ is used ($g > 0.5$) in the AsV model for each category. For the question types “Movie to Language” and “Movie to Genre” (the so-called “choice questions”) the number of possible answers is small. In this case, even if the answer can be found in the context, it is easier for the model to select the answer from an external vocabulary which encodes global statistics about the entities. For other “free questions”, depending on the question type, one approach is better than the other. Our model is able to successfully estimate the latent category and switch the model type by controlling the coefficient $g$.
Related Work
hierarchical:16 solve the QA problem by selecting a sentence in the document. They show that joint training of selection and comprehension slightly improves the performance. In our case, joint training is much harder because of the large number of movie articles. Hence we introduce a two-step retrieval and comprehension approach.
Recently architecture:16 proposed a framework to use the performance on a downstream task (e.g. comprehension) as a signal to guide the learning of neural network which determines the input to the downstream task (e.g. retrieval). This motivates us to introduce neural network based approach for both retrieval and comprehension, since in this case the retrieval step can be directly trained to maximize the downstream performance.
In the context of language modeling, the idea of combining of two output probabilities is given in BIBREF20 , however, our equation to compute the mixture coefficient is slightly different. More recently, ahn2016neural used a mixture model to predict the next word from either the entire vocabulary, or a set of Knowledge Base facts associated with the text. In this work, we present the first application of such a mixture model to reading comprehension.
Conclusion and Future Work
We have developed a QA system using a two-step retrieval and comprehension approach. The comprehension step uses a mixture model to achieve state-of-the-art performance on the WikiMovies dataset, improving on previous work by a significant margin.
We would like to emphasize that our approach has minimal heuristics and does not use dataset specific feature engineering. Efficient retrieval while maintaining representation variation is a challenging problem. While there has been a lot of research on comprehension, little focus has been given to designing neural network based retrieval models. We present a simple such model, and emphasize the importance of this direction of research. | Using the answer labels in the training set, we can find appropriate articles that include the information requested in the question. |
12391aab31c899bac0ecd7238c111cb73723a6b7 | 12391aab31c899bac0ecd7238c111cb73723a6b7_0 | Q: Which algorithm is used in the UDS-DFKI system?
Text: Introduction
The shared tasks organized annually at WMT provide important benchmarks used in the MT community. Most of these shared tasks include English data, which contributes to making English the most resource-rich language in MT and NLP. In the most popular WMT shared task, the News task, for example, MT systems have been trained to translate texts from and to English BIBREF0, BIBREF1.
This year, we have observed a shift in the dominant role that English plays in the WMT shared tasks. The News task featured for the first time two language pairs which did not include English: German-Czech and French-German. In addition to that, the Similar Language Translation task was organized for the first time at WMT 2019 with the purpose of evaluating the performance of MT systems on three pairs of similar languages from three different language families: Ibero-Romance, Indo-Aryan, and Slavic.
The Similar Language Translation BIBREF2 task provided participants with training, development, and testing data from the following language pairs: Spanish - Portuguese (Romance languages), Czech - Polish (Slavic languages), and Hindi - Nepali (Indo-Aryan languages). Participants could submit system outputs for any of the three language pairs in any direction. The shared task attracted a good number of participants and the performance of all entries was evaluated using popular MT automatic evaluation metrics, namely BLEU BIBREF3 and TER BIBREF4.
In this paper we describe the UDS-DFKI system to the WMT 2019 Similar Language Translation task. The system achieved competitive performance and ranked second among ten entries in Czech to Polish translation in terms of BLEU score.
Related Work
With the widespread use of MT technology and the commercial and academic success of NMT, there has been more interest in training systems to translate between languages other than English BIBREF5. One reason for this is the growing need for direct translation between pairs of similar languages, and to a lesser extent language varieties, without the use of English as a pivot language. The main challenge is to overcome the limited availability of parallel data by taking advantage of the similarity between the languages. Studies have been published on translating between similar languages (e.g. Catalan - Spanish BIBREF5) and language varieties such as European and Brazilian Portuguese BIBREF6, BIBREF7. The study by lakew2018neural tackles training MT systems both for the language variety pairs European–Brazilian Portuguese and European–Canadian French, and for two pairs of similar languages, Croatian–Serbian and Indonesian–Malay.
Processing similar languages and language varieties has attracted attention not only in the MT community but in NLP in general. This is evidenced by a number of research papers published in the last few years and the recent iterations of the VarDial evaluation campaign which featured multiple shared tasks on topics such as dialect detection, morphosyntactic tagging, cross-lingual parsing, cross-lingual morphological analysis BIBREF8, BIBREF9.
Data
We used the Czech–Polish dataset provided by the WMT 2019 Similar Language Translation task organizers for our experiments. The released parallel dataset consists of out-of-domain (or general-domain) data only and it differs substantially from the released development set which is part of a TED corpus. The parallel data includes Europarl v9, Wiki-titles v1, and JRC-Acquis. We combine all the released data and prepare a large out-domain dataset.
Data ::: Pre-processing
The out-domain data is noisy for our purposes, so we apply methods for cleaning. We perform the following two steps: (i) we use the cleaning process described in Pal:2015:WMT, and (ii) we execute the Moses BIBREF10 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then we use the Moses tokenizer to tokenize the out-domain corpus with the `no-escape' option. Finally, we apply true-casing.
The cleaned version of the released data, i.e., the General corpus containing 1,394,319 sentences, is sorted based on the score in Equation DISPLAY_FORM2. Thereafter, we split the entire data (1,394,319 sentences) into two sets; we use the first 1,000 sentences for validation and the remaining ones as training data. The released development set (Dev) is used as test data for our experiment. It should be noted that we exclude the 1,000 sentences from the General corpus which are scored highest (i.e., most in-domain like) during the data selection process.
We prepare two parallel training sets from the aforementioned training data: (i) transference500K (presented next), containing 500,000 parallel sentences collected through the data selection method BIBREF11, which are very similar to the in-domain data (in our case the development set), and (ii) transferenceALL, utilizing all the released out-domain data sorted by Equation DISPLAY_FORM2.
The transference500K training set is prepared using in-domain (development set) bilingual cross-entropy difference for data selection as described in BIBREF11. The difference in cross-entropy is computed based on two language models (LM): a domain-specific LM is estimated from the in-domain (containing 2050 sentences) corpus ($lm_{i}$) and the out-domain LM ($lm_{o}$) is estimated from the eScape corpus. We rank the eScape corpus by assigning a score to each of the individual sentences which is the sum of the three cross-entropy ($H$) differences. For the $j^{th}$ sentence pair ${src}_j$–${trg}_j$, the score is calculated based on Equation DISPLAY_FORM2.
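The ranking criterion can be sketched as below, with toy unigram language models standing in for the actual in-domain and general-domain LMs; note that the score described above sums three cross-entropy differences, whereas for brevity the sketch shows a two-sided (source and target) variant.

```python
# Sketch of cross-entropy-difference data selection with toy unigram LMs (illustrative only).
import math
from collections import Counter

class UnigramLM:
    def __init__(self, sentences):
        tokens = [t for s in sentences for t in s.split()]
        self.counts, self.total = Counter(tokens), len(tokens)
        self.vocab = len(self.counts) + 1
    def cross_entropy(self, sentence):
        toks = sentence.split()
        logp = sum(math.log((self.counts[t] + 1) / (self.total + self.vocab)) for t in toks)
        return -logp / max(len(toks), 1)

def selection_score(src, trg, lm_in_src, lm_out_src, lm_in_trg, lm_out_trg):
    # lower = more in-domain-like; sentence pairs are ranked by this value
    return (lm_in_src.cross_entropy(src) - lm_out_src.cross_entropy(src)
            + lm_in_trg.cross_entropy(trg) - lm_out_trg.cross_entropy(trg))

lm_in = UnigramLM(["dobry den", "to je skvele"])              # toy in-domain (dev-like) sample
lm_out = UnigramLM(["smlouva o evropske unii", "dobry den pane"])
print(selection_score("dobry den", "dzien dobry", lm_in, lm_out, lm_in, lm_out))
```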
System Architecture - The Transference Model
Our transference model extends the original transformer model to a multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon attention mechanisms, completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. We use multi-head attention to jointly attend to information at different positions from different representation subspaces.
The first encoder ($enc_1$) of our model encodes word form information of the source ($f_w$), and a second sub-encoder ($enc_2$) encodes sub-word (byte-pair encoding) information of the source ($f_s$). Additionally, a second encoder ($enc_{1 \rightarrow 2}$) takes the encoded representation from $enc_1$, combines it with the self-attention-based encoding of $f_s$ ($enc_2$), and prepares a representation for the decoder ($dec_{e}$) via cross-attention. This second encoder ($enc_{1 \rightarrow 2}$) can be viewed as a transformer-based NMT decoding block, however without masking. The intuition behind our architecture is to generate better representations via both self- and cross-attention and to further facilitate the learning capacity of the feed-forward layer in the decoder block. In our transference model, one self-attended encoder for $f_w$, $\mathbf {f_w}$ = $(w_1, w_2, \ldots , w_k)$, returns a sequence of continuous representations, $enc_{1}$, and a second self-attended sub-encoder for $f_s$, $\mathbf {f_s}$ = $(s_1, s_2, \ldots , s_l)$, returns another sequence of continuous representations, $enc_{2}$. Self-attention at this point provides the advantage of aggregating information from all of the words in $f_w$ and $f_s$, and successively generates a new representation per word informed by the entire $f_w$ and $f_s$ context. The internal $enc_{2}$ representation performs cross-attention over $enc_{1}$ and prepares a final representation ($enc_{1 \rightarrow 2}$) for the decoder ($dec_{e}$). The decoder generates the output sequence $\mathbf {e}$ = $(e_1, e_2, \ldots , e_n)$ one word at a time from left to right, attending to previously generated words as well as the final representations ($enc_{1 \rightarrow 2}$) generated by the encoder.
We use the scaled dot-product attention mechanism (as in BIBREF12) for both self- and cross-attention, as defined in Equation DISPLAY_FORM3, where $Q$, $K$ and $V$ are query, key and value, respectively, and $d_k$ is the dimension of $K$.
The multi-head attention mechanism in the transformer network maps the $Q$, $K$, and $V$ matrices by using different linear projections. Then $h$ parallel heads are employed to focus on different parts of $V$. The $i^{th}$ multi-head attention is denoted by $head_i$ in Equation DISPLAY_FORM4. $head_i$ is computed with three learned projection parameter matrices: $W_i^Q,W_i^K \in R^{d_{model} \times d_k}$, $W_i^V \in R^{d_{model} \times d_v}$; where $d_k = d_v = d_{model}/h$, and $d_{model}$ is the number of hidden units of our network.
Finally, the vectors produced by the parallel heads are concatenated and linearly projected to form a single vector, called the multi-head attention ($M_{att}$) (cf. Equation DISPLAY_FORM5). Here the dimension of the learned weight matrix $W^O$ is $R^{d_{model} \times d_{model}}$.
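The sketch below illustrates the multi-head computation of Equations DISPLAY_FORM4 and DISPLAY_FORM5, reusing the `scaled_dot_product_attention` function from the previous sketch; the module layout (separate `nn.Linear` projections) is an assumption, not the authors' code.

```python
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """h parallel heads with d_k = d_v = d_model / h and a final output projection W^O."""
    def __init__(self, d_model=512, h=8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)  # W^O of Equation DISPLAY_FORM5

    def forward(self, query, key, value, mask=None):
        B = query.size(0)
        def split(x):  # (B, T, d_model) -> (B, h, T, d_k)
            return x.view(B, -1, self.h, self.d_k).transpose(1, 2)
        Q, K, V = split(self.W_q(query)), split(self.W_k(key)), split(self.W_v(value))
        heads, _ = scaled_dot_product_attention(Q, K, V, mask)  # from the sketch above
        concat = heads.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_k)
        return self.W_o(concat)  # M_att
```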
Experiments
We explore our transference model, a two-encoder-based transformer architecture, in the CS-PL similar language translation task.
Experiments ::: Experiment Setup
For transferenceALL, we initially train on the complete out-of-domain dataset (General). The General data is sorted based on its in-domain similarity as described in Equation DISPLAY_FORM2.
transferenceALL models are then fine-tuned towards the 500K (in-domain-like) data. Finally, we perform checkpoint averaging using the 8 best checkpoints. We report results on the provided development set, which we use as a test set before the submission. Additionally, we report the official test set results.
To handle out-of-vocabulary words and to reduce the vocabulary size, instead of considering words, we consider subword units BIBREF13 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in Czech (CS) and Polish (PL), we define BPE tokens by jointly processing all parallel data. Thus, CS and PL derive a single BPE vocabulary. Since CS and PL are similar languages, they naturally share a good fraction of BPE tokens, which reduces the vocabulary size.
We pass word-level information to the first encoder and BPE information to the second one. On the decoder side of the transference model we pass only BPE text.
We evaluate our approach on the development data, which is used as a test set before submission. We use BLEU BIBREF3 and TER BIBREF4.
Experiments ::: Hyper-parameter Setup
We follow a similar hyper-parameter setup for all reported systems. All encoders, and the decoder, are composed of a stack of $N_{fw} = N_{fs} = N_{es} = 6$ identical layers followed by layer normalization. Each layer again consists of two sub-layers and a residual connection BIBREF14 around each of the two sub-layers. We apply dropout BIBREF15 to the output of each sub-layer, before it is added to the sub-layer input and normalized. Furthermore, dropout is applied to the sums of the word embeddings and the corresponding positional encodings in both encoders as well as the decoder stacks.
We set all dropout values in the network to 0.1. During training, we employ label smoothing with value $\epsilon _{ls}$ = 0.1. The output dimension produced by all sub-layers and embedding layers is $d_{model} = 512$. Each encoder and decoder layer contains a fully connected feed-forward network ($FFN$) having dimensionality of $d_{model} = 512$ for the input and output and dimensionality of $d_{ff} = 2048$ for the inner layers. For the scaled dot-product attention, the input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. As multi-head attention parameters, we employ $h = 8$ for parallel attention layers, or heads. For each of these we use a dimensionality of $d_k = d_v = d_{model}/h = 64$. For optimization, we use the Adam optimizer BIBREF16 with $\beta _1 = 0.9$, $\beta _2 = 0.98$ and $\epsilon = 10^{-9}$.
The learning rate is varied throughout the training process: it increases for the first $warmup_{steps} = 8000$ training steps and decreases afterwards, as described in BIBREF12. All remaining hyper-parameters are set analogously to those of the transformer's base model. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords, and a vocabulary size of 28K. After each epoch, the training data is shuffled. After finishing training, we save the 8 best checkpoints which are written at each epoch. Finally, we use a single model obtained by averaging the last 8 checkpoints. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between CS and PL in all our experiments.
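The warm-up/decay behaviour described above follows the schedule of BIBREF12; a small sketch of that schedule with the hyper-parameter values reported here is given below for reference.

```python
def transformer_lr(step, d_model=512, warmup_steps=8000):
    """Transformer base-model schedule: linear warm-up for the first `warmup_steps`,
    then decay proportional to the inverse square root of the step number."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate peaks around step 8000 and decays afterwards:
for s in (1000, 8000, 100000):
    print(s, transformer_lr(s))
```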
Results
We present the results obtained by our system in Table TABREF8.
Our system fine-tuned on the development set provides a significant performance improvement over the generic model. We found a +12.9 absolute BLEU point improvement over the generic model. A similar improvement is also observed in terms of TER (-16.9 absolute). It should be noted that our generic model is trained solely on the clean version of the training data.
Before submission, we performed punctuation normalization, unicode normalization, and detokenization for the run.
In Table TABREF9 we present the ranking of the competition provided by the shared task organizers. Ten entries were submitted by five teams and are ordered by BLEU score. TER is reported for all submissions which achieved a BLEU score greater than 5.0. The type column specifies the type of system, whether it is a Primary (P) or Contrastive (C) entry.
Our system was ranked second in the competition, only 0.3 BLEU points behind the winning team UPC-TALP. The relatively low BLEU and high TER scores obtained by all teams are due to the out-of-domain data provided in the competition, which made the task equally challenging for all participants.
Conclusion
This paper presented the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. We presented the results obtained by our system in translating from Czech to Polish. Our system achieved competitive performance, ranking second among ten teams in the competition in terms of BLEU score. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants.
In future work, we would like to investigate how effective the proposed hypothesis (i.e., combining word- and BPE-level information) is in similar language translation. Furthermore, we would like to explore the similarity between these two languages (and the other two language pairs in the competition) in more detail by training models that can best capture the morphological differences between them. During such competitions, this is not always possible due to time constraints.
Acknowledgments
This research was funded in part by the German research foundation (DFG) under grant number GE 2819/2-1 (project MMPE) and the German Federal Ministry of Education and Research (BMBF) under funding code 01IW17001 (project Deeplee). The responsibility for this publication lies with the authors. We would like to thank the anonymous WMT reviewers for their valuable input, and the organizers of the shared task. | Our transference model extends the original transformer model to multi-encoder based transformer architecture. The transformer architecture BIBREF12 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. |
8b43201e7e648c670c02e16ba189230820879228 | 8b43201e7e648c670c02e16ba189230820879228_0 | Q: Does the use of out-of-domain data improve the performance of the method?
Text: Introduction
The shared tasks organized annually at WMT provide important benchmarks used in the MT community. Most of these shared tasks include English data, which contributes to making English the most resource-rich language in MT and NLP. In the most popular WMT shared task, for example, the News task, MT systems have been trained to translate texts from and to English BIBREF0, BIBREF1.
This year, we have observed a shift in the dominant role that English plays in the WMT shared tasks. The News task featured for the first time two language pairs which did not include English: German-Czech and French-German. In addition to that, the Similar Language Translation task was organized for the first time at WMT 2019 with the purpose of evaluating the performance of MT systems on three pairs of similar languages from three different language families: Ibero-Romance, Indo-Aryan, and Slavic.
The Similar Language Translation BIBREF2 task provided participants with training, development, and testing data from the following language pairs: Spanish - Portuguese (Romance languages), Czech - Polish (Slavic languages), and Hindi - Nepali (Indo-Aryan languages). Participants could submit system outputs to any of the three language pairs in any direction. The shared task attracted a good number of participants and the performance of all entries was evaluated using popular automatic MT evaluation metrics, namely BLEU BIBREF3 and TER BIBREF4.
In this paper we describe the UDS-DFKI submission to the WMT 2019 Similar Language Translation task. The system achieved competitive performance and ranked second among ten entries in Czech-to-Polish translation in terms of BLEU score.
Related Work
With the widespread use of MT technology and the commercial and academic success of NMT, there has been more interest in training systems to translate between languages other than English BIBREF5. One reason for this is the growing need for direct translation between pairs of similar languages, and to a lesser extent language varieties, without the use of English as a pivot language. The main challenge is to overcome the limited availability of parallel data by taking advantage of the similarity between languages. Studies have been published on translating between similar languages (e.g. Catalan - Spanish BIBREF5) and language varieties such as European and Brazilian Portuguese BIBREF6, BIBREF7. The study by lakew2018neural tackles both training MT systems to translate between language varieties (European–Brazilian Portuguese and European–Canadian French) and two pairs of similar languages (Croatian–Serbian and Indonesian–Malay).
Processing similar languages and language varieties has attracted attention not only in the MT community but in NLP in general. This is evidenced by a number of research papers published in the last few years and the recent iterations of the VarDial evaluation campaign, which featured multiple shared tasks on topics such as dialect detection, morphosyntactic tagging, cross-lingual parsing, and cross-lingual morphological analysis BIBREF8, BIBREF9.
Data
We used the Czech–Polish dataset provided by the WMT 2019 Similar Language Translation task organizers for our experiments. The released parallel dataset consists of out-of-domain (or general-domain) data only, and it differs substantially from the released development set, which is part of a TED corpus. The parallel data includes Europarl v9, Wiki-titles v1, and JRC-Acquis. We combine all the released data and prepare a large out-of-domain dataset.
Data ::: Pre-processing
The out-of-domain data is noisy for our purposes, so we apply cleaning methods. We performed the following two steps: (i) we use the cleaning process described in Pal:2015:WMT, and (ii) we execute the Moses BIBREF10 corpus cleaning scripts with the minimum and maximum number of tokens set to 1 and 100, respectively. After cleaning, we perform punctuation normalization, and then we use the Moses tokenizer to tokenize the out-of-domain corpus with the `no-escape' option. Finally, we apply true-casing.
The cleaned version of the released data, i.e., the General corpus containing 1,394,319 sentences, is sorted based on the score in Equation DISPLAY_FORM2. Thereafter, we split the entire data (1,394,319 sentences) into two sets; we use the first 1,000 for validation and the remaining as training data. The released development set (Dev) is used as test data for our experiment. It should be noted that we exclude the 1,000 sentences from the General corpus which are scored highest (i.e., most in-domain-like) during the data selection process.
We prepare two parallel training sets from the aforementioned training data: (i) transference500K (presented next), 500,000 parallel sentence pairs collected through the data selection method BIBREF11, which are very similar to the in-domain data (in our case the development set), and (ii) transferenceALL, utilizing all the released out-of-domain data sorted by Equation DISPLAY_FORM2.
The transference500K training set is prepared using in-domain (development set) bilingual cross-entropy difference for data selection, as described in Axelrod:2011. The difference in cross-entropy is computed based on two language models (LM): a domain-specific LM is estimated from the in-domain corpus (containing 2,050 sentences) ($lm_{i}$), and the out-of-domain LM ($lm_{o}$) is estimated from the eScape corpus. We rank the eScape corpus by assigning a score to each individual sentence, which is the sum of the three cross-entropy ($H$) differences. For a $j^{th}$ sentence pair ${src}_j$–${trg}_j$, the score is calculated based on Equation DISPLAY_FORM2.
System Architecture - The Transference Model
Our transference model extends the original transformer model to a multi-encoder-based transformer architecture. The transformer architecture BIBREF12 is built solely upon attention mechanisms, completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which can be computed in parallel. We use multi-head attention to jointly attend to information at different positions from different representation subspaces.
The first encoder ($enc_1$) of our model encodes word-form information of the source ($f_w$), and a second sub-encoder ($enc_2$) encodes sub-word (byte-pair-encoding) information of the source ($f_s$). Additionally, a second-stage encoder ($enc_{1 \rightarrow 2}$) takes the encoded representation from $enc_1$, combines it with the self-attention-based encoding of $f_s$ ($enc_2$), and prepares a representation for the decoder ($dec_{e}$) via cross-attention. This second-stage encoder ($enc_{1 \rightarrow 2}$) can be viewed as a transformer-based NMT decoding block, however without masking. The intuition behind our architecture is to generate better representations via both self- and cross-attention and to further facilitate the learning capacity of the feed-forward layer in the decoder block. In our transference model, one self-attended encoder for $f_w$, $\mathbf {f_w}$ = $(w_1, w_2, \ldots , w_k)$, returns a sequence of continuous representations, $enc_{1}$, and a second self-attended sub-encoder for $f_s$, $\mathbf {f_s}$ = $(s_1, s_2, \ldots , s_l)$, returns another sequence of continuous representations, $enc_{2}$. Self-attention at this point provides the advantage of aggregating information from all of the words, including $f_w$ and $f_s$, and successively generates a new representation per word informed by the entire $f_w$ and $f_s$ context. The internal $enc_{2}$ representation performs cross-attention over $enc_{1}$ and prepares a final representation ($enc_{1 \rightarrow 2}$) for the decoder ($dec_{e}$). The decoder then generates the output sequence $\mathbf {e}$ = $(e_1, e_2, \ldots , e_n)$ one word at a time from left to right, attending to previously generated words as well as the final representations ($enc_{1 \rightarrow 2}$) generated by the encoder.
We use the scaled dot-product attention mechanism (as in Vaswani:NIPS2017) for both self- and cross-attention, as defined in Equation DISPLAY_FORM3, where $Q$, $K$ and $V$ are the query, key and value, respectively, and $d_k$ is the dimension of $K$.
The multi-head attention mechanism in the transformer network maps the $Q$, $K$, and $V$ matrices by using different linear projections. Then $h$ parallel heads are employed to focus on different parts of $V$. The $i^{th}$ multi-head attention is denoted by $head_i$ in Equation DISPLAY_FORM4. $head_i$ is computed with three learned projection parameter matrices: $W_i^Q,W_i^K \in R^{d_{model} \times d_k}$, $W_i^V \in R^{d_{model} \times d_v}$; where $d_k = d_v = d_{model}/h$, and $d_{model}$ is the number of hidden units of our network.
Finally, the vectors produced by the parallel heads are concatenated and linearly projected to form a single vector, called the multi-head attention ($M_{att}$) (cf. Equation DISPLAY_FORM5). Here the dimension of the learned weight matrix $W^O$ is $R^{d_{model} \times d_{model}}$.
Experiments
We explore our transference model, a two-encoder-based transformer architecture, in the CS-PL similar language translation task.
Experiments ::: Experiment Setup
For transferenceALL, we initially train on the complete out-of-domain dataset (General). The General data is sorted based on its in-domain similarity as described in Equation DISPLAY_FORM2.
transferenceALL models are then fine-tuned towards the 500K (in-domain-like) data. Finally, we perform checkpoint averaging using the 8 best checkpoints. We report results on the provided development set, which we use as a test set before the submission. Additionally, we report the official test set results.
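Checkpoint averaging as used above can be sketched as follows in PyTorch; this assumes each checkpoint file stores a plain parameter state_dict, which may not match the exact format of our training pipeline, and the file names are illustrative.

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several checkpoints (assumed to be plain state_dicts)."""
    avg = None
    for p in paths:
        state = torch.load(p, map_location='cpu')
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Illustrative usage: model.load_state_dict(average_checkpoints(['ckpt1.pt', 'ckpt2.pt']))
```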
To handle out-of-vocabulary words and to reduce the vocabulary size, instead of considering words, we consider subword units BIBREF13 by using byte-pair encoding (BPE). In the preprocessing step, instead of learning an explicit mapping between BPEs in Czech (CS) and Polish (PL), we define BPE tokens by jointly processing all parallel data. Thus, CS and PL derive a single BPE vocabulary. Since CS and PL are similar languages, they naturally share a good fraction of BPE tokens, which reduces the vocabulary size.
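The effect of a joint BPE vocabulary can be illustrated with the classic merge-learning loop below. In practice a toolkit such as subword-nmt or SentencePiece would be used; this toy re-implementation only shows that merges are learned on the concatenation of CS and PL data and are therefore shared by both sides.

```python
import collections
import re

def get_stats(vocab):
    """Count adjacent symbol-pair frequencies in a {'w o r d </w>': freq} vocabulary."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

def learn_joint_bpe(cs_words, pl_words, num_merges=10):
    """Learn BPE merges on both languages together, yielding a single shared vocabulary."""
    vocab = collections.Counter(' '.join(list(w)) + ' </w>' for w in list(cs_words) + list(pl_words))
    merges = []
    for _ in range(num_merges):
        stats = get_stats(vocab)
        if not stats:
            break
        best = max(stats, key=stats.get)
        vocab = merge_vocab(best, vocab)
        merges.append(best)
    return merges
```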
We pass word-level information to the first encoder and BPE information to the second one. On the decoder side of the transference model we pass only BPE text.
We evaluate our approach on the development data, which is used as a test set before submission. We use BLEU BIBREF3 and TER BIBREF4.
Experiments ::: Hyper-parameter Setup
We follow a similar hyper-parameter setup for all reported systems. All encoders, and the decoder, are composed of a stack of $N_{fw} = N_{fs} = N_{es} = 6$ identical layers followed by layer normalization. Each layer again consists of two sub-layers and a residual connection BIBREF14 around each of the two sub-layers. We apply dropout BIBREF15 to the output of each sub-layer, before it is added to the sub-layer input and normalized. Furthermore, dropout is applied to the sums of the word embeddings and the corresponding positional encodings in both encoders as well as the decoder stacks.
We set all dropout values in the network to 0.1. During training, we employ label smoothing with value $\epsilon _{ls}$ = 0.1. The output dimension produced by all sub-layers and embedding layers is $d_{model} = 512$. Each encoder and decoder layer contains a fully connected feed-forward network ($FFN$) having dimensionality of $d_{model} = 512$ for the input and output and dimensionality of $d_{ff} = 2048$ for the inner layers. For the scaled dot-product attention, the input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. As multi-head attention parameters, we employ $h = 8$ for parallel attention layers, or heads. For each of these we use a dimensionality of $d_k = d_v = d_{model}/h = 64$. For optimization, we use the Adam optimizer BIBREF16 with $\beta _1 = 0.9$, $\beta _2 = 0.98$ and $\epsilon = 10^{-9}$.
The learning rate is varied throughout the training process: it increases for the first $warmup_{steps} = 8000$ training steps and decreases afterwards, as described in BIBREF12. All remaining hyper-parameters are set analogously to those of the transformer's base model. At training time, the batch size is set to 25K tokens, with a maximum sentence length of 256 subwords, and a vocabulary size of 28K. After each epoch, the training data is shuffled. After finishing training, we save the 8 best checkpoints which are written at each epoch. Finally, we use a single model obtained by averaging the last 8 checkpoints. During decoding, we perform beam search with a beam size of 4. We use shared embeddings between CS and PL in all our experiments.
Results
We present the results obtained by our system in Table TABREF8.
Our system fine-tuned on the development set provides a significant performance improvement over the generic model. We found a +12.9 absolute BLEU point improvement over the generic model. A similar improvement is also observed in terms of TER (-16.9 absolute). It should be noted that our generic model is trained solely on the clean version of the training data.
Before submission, we performed punctuation normalization, unicode normalization, and detokenization for the run.
In Table TABREF9 we present the ranking of the competition provided by the shared task organizers. Ten entries were submitted by five teams and are ordered by BLEU score. TER is reported for all submissions which achieved a BLEU score greater than 5.0. The type column specifies the type of system, whether it is a Primary (P) or Contrastive (C) entry.
Our system was ranked second in the competition, only 0.3 BLEU points behind the winning team UPC-TALP. The relatively low BLEU and high TER scores obtained by all teams are due to the out-of-domain data provided in the competition, which made the task equally challenging for all participants.
Conclusion
This paper presented the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. We presented the results obtained by our system in translating from Czech to Polish. Our system achieved competitive performance, ranking second among ten teams in the competition in terms of BLEU score. The fact that out-of-domain data was provided by the organizers resulted in a challenging but interesting scenario for all participants.
In future work, we would like to investigate how effective the proposed hypothesis (i.e., combining word- and BPE-level information) is in similar language translation. Furthermore, we would like to explore the similarity between these two languages (and the other two language pairs in the competition) in more detail by training models that can best capture the morphological differences between them. During such competitions, this is not always possible due to time constraints.
Acknowledgments
This research was funded in part by the German research foundation (DFG) under grant number GE 2819/2-1 (project MMPE) and the German Federal Ministry of Education and Research (BMBF) under funding code 01IW17001 (project Deeplee). The responsibility for this publication lies with the authors. We would like to thank the anonymous WMT reviewers for their valuable input, and the organizers of the shared task. | No |
5d5a571ff04a5fdd656ca87f6525a60e917d6558 | 5d5a571ff04a5fdd656ca87f6525a60e917d6558_0 | Q: Do they impose any grammatical constraints over the generated output?
Text: Introduction
Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in its long history of development. Prescriptions consisting of herbal medication are the most important form of TCM treatment. TCM practitioners prescribe according to a patient's symptoms that are observed and analyzed by the practitioners themselves instead of using medical equipment, e.g., CT scanners. The patient takes the decoction made out of the herbal medication in the prescription. A complete prescription includes the composition of herbs, the proportion of herbs, the preparation method and the doses of the decoction. In this work, we focus on the composition part of the prescription, which is its most essential part.
During the long history of TCM, there have been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that, due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of a TCM prescription is shown in Table 1. The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs is not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first.
Due to the lack of digitalization and formalization, TCM has not attracted sufficient attention in the artificial intelligence community. To facilitate the studies on automatic TCM prescription generation, we collect and clean a large number of prescriptions as well as their corresponding symptom descriptions from the Internet.
Inspired by the great success of natural language generation tasks like neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , abstractive summarization BIBREF3 , generative question answering BIBREF4 , and neural dialogue response generation BIBREF5 , BIBREF6 , we propose to adopt the end-to-end paradigm, mainly the sequence to sequence model, to tackle the task of generating TCM prescriptions based on textual symptom descriptions.
The sequence to sequence model (seq2seq) consists of an encoder that encodes the input sequence and a decoder that generates the output sequence. The success in the language generation tasks indicates that the seq2seq model can learn the semantic relation between the output sequence and the input sequence quite well. It is also a desirable characteristic for generating prescriptions according to the textual symptom description.
The prescription generation task is similar to the generative question answering (QA). In such task settings, the encoder part of the model takes in the question, and encodes the sequence of tokens into a set of hidden states, which embody the information of the question. The decoder part then iteratively generates tokens based on the information encoded in the hidden states of the encoder. The model would learn how to generate response after training on the corresponding question-answer pairs.
In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. The most evident difference is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the recall rate even after applying a post-processing step to eliminate repetitions, because within the limited length of the prescription the model produces the same token over and over again rather than real and novel ones. Furthermore, the basic seq2seq model assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore how to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with a coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated, while the soft loss function is to relieve the side effect of the strict order assumption. In the experimental results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model.
The main contributions of this paper are threefold:
Related Work
There has not been much work concerning computational TCM. zhou2010development attempted to build a TCM clinical data warehouse so that the TCM knowledge can be analyzed and used. This is a typical way of collecting data, since the number of prescriptions given by the practitioners in the clinics is very large. However, in reality, most of the TCM doctors do not refer to the constructed digital systems, because the quality of the input data tends to be poor. Therefore, we choose prescriptions in the classics (books or documentation) of TCM. Although the available data can be fewer than the clinical data, it guarantees the quality of the prescriptions.
wang2004self attempted to construct a self-learning expert system with several simple classifiers to facilitate the TCM diagnosis procedure, and Wang2013TCM proposed to use shallow neural networks and CRF-based multi-label learning methods to model the TCM inquiry process, but they only considered the disease of chronic gastritis, whose taxonomy is very simple. These methods either utilize traditional data mining methods or are highly involved with expert-crafted systems. Zhang2011Topic,Zhu2017TCM proposed to use LDA to model the herbs. li2017distributed proposed to learn distributed embeddings for TCM herbs with recurrent neural networks.
Methodology
Neural sequence to sequence models have proven to be very effective in a wide range of natural language generation tasks, including neural machine translation and abstractive text summarization. In this section, we first describe the definition of the TCM prescription generation task. Then, we introduce how to apply the seq2seq model in the prescription composition task. Next, we show how to guide the model to generate more fruitful herbs in the setting of this task by introducing the coverage mechanism. Finally, we introduce our novel soft loss function that relieves the strict assumption of order between tokens. An overview of our final model is shown in Figure 1.
Task Definition
Given a TCM herbal treatment dataset that consists of $N$ data samples, the $i$ -th data sample ( $x^{(i)}, p^{(i)}$ ) contains one piece of source text $x^{(i)}$ that describes the symptoms, and $M_{i}$ TCM herbs $(p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i})$ that make up the herb prescription $p^{(i)}$ .
We view the symptoms as a sequence of characters $x^{(i)} = (x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{T})$ . We do not segment the characters into words because they are mostly in traditional Chinese that uses characters as basic semantic units. The herbs $p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i}$ are all different from each other.
Basic Encoder-Decoder Model
The sequence-to-sequence model was first proposed to solve the machine translation problem. The model consists of two parts, an encoder and a decoder. The encoder takes in the source sequence and compresses the sequence into a series of hidden states. The decoder is used to generate a sequence of target tokens based on the information embodied in the hidden states given by the encoder. Typically, both the encoder and the decoder are implemented with recurrent neural networks (RNN).
In our TCM prescription generation task, the encoder RNN converts the variable-length symptoms in character sequence $x = (x_{1},x_{2},...,x_{T})$ into a set of hidden representations $h = (h_{1},h_{2},...,h_{T})$ , by iterating the following equations along time $t$ :
$$h_{t} = f(x_{t},h_{t-1})$$ (Eq. 8)
where $f$ is a RNN family function. In our implementation, we choose the gated recurrent unit (GRU BIBREF1) as $f$, as the gating mechanism is expected to model long-distance dependencies better. Furthermore, we choose the bidirectional version of recurrent neural networks as the encoder to solve the problem that the later words get more emphasis in the unidirectional version. We concatenate the $h_{t}$ of the forward and backward passes and get $\widehat{h_{t}}$ as the final representation of the hidden state at time step $t$.
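A minimal PyTorch sketch of such a bidirectional GRU encoder is given below; the sizes follow the experimental setup reported later (embedding size 100, hidden size 300), while the padding index and batch-first layout are illustrative assumptions.

```python
import torch.nn as nn

class SymptomEncoder(nn.Module):
    """Bidirectional GRU over symptom characters; returns the concatenated
    forward/backward hidden state (the \hat{h}_t above) for every time step."""
    def __init__(self, vocab_size, emb_size=100, hidden_size=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size, padding_idx=0)
        self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, bidirectional=True)

    def forward(self, x):               # x: (batch, T) character ids
        h, _ = self.gru(self.embed(x))  # h: (batch, T, 2 * hidden_size)
        return h
```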
We get the context vector $c_{t}$ representing the whole source $x$ at the $t$-th time step through a non-linear function $q$, normally known as the attention mechanism:
$$c_{t} = \sum _{j=1}^{T}\alpha _{tj}h_{j} \\ \alpha _{tj} = \frac{\text{exp}\left( a\left(s_{t-1},h_{j}\right)\right)}{\sum _{k=1}^{T}\text{exp}\left( a\left(s_{t-1},h_{k}\right)\right)}$$ (Eq. 9)
The context vector $c_{t}$ is calculated as a weighted sum of the hidden representations produced by the encoder $\textbf {h} = (h_{1},...,h_{T})$. $a(s_{t-1},h_{j})$ is a soft alignment function that measures the relevance between $s_{t-1}$ and $h_{j}$. It computes how much $h_j$ is needed for the $t$-th output word based on the previous hidden state of the decoder $s_{t-1}$. The decoder is another RNN. It generates a variable-length sequence $y = (y_{1},y_{2}, ..., y_{T^{\prime }})$ token by token (herb), through a conditional language model:
$$s_{t} = f(s_{t-1},c_{t},Ey_{t-1}) \\ p(y_{t}|y_{1,...,t-1},x) = g(s_{t})$$ (Eq. 10)
where $s_{t}$ is the hidden state of the decoder RNN at time step $t$. $f$ is also a gated recurrent unit. The non-linear function $g$ is a $softmax$ layer, which outputs the probabilities of all the herbs in the herb vocabulary. $E \in \mathbb {R}^{V\times d}$ is the embedding matrix of the target tokens, $V$ is the size of the herb vocabulary, $d$ is the embedding dimension. $y_{t-1}$ is the last predicted token.
In the decoder, the context vector $c_{t}$ is calculated based on the hidden state $s_{t-1}$ of the decoder at time step $t-1$ and all the hidden states in the encoder. The procedure is known as the attention mechanism. The attention mechanism is expected to supplement the information from the source sequence that is more connected to the current hidden state of the decoder instead of only depending on a fixed vector produced by the encoder.
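The paper does not spell out the exact form of the alignment function $a(s_{t-1}, h_{j})$; one common choice is the additive form sketched below, which should be read as an assumption rather than the authors' implementation. The default sizes match the reported setup (decoder hidden 300, bidirectional encoder output 600).

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """alpha_{tj} = softmax_j v^T tanh(W_s s_{t-1} + W_h h_j); c_t = sum_j alpha_{tj} h_j."""
    def __init__(self, dec_size=300, enc_size=600, attn_size=300):
        super().__init__()
        self.W_s = nn.Linear(dec_size, attn_size, bias=False)
        self.W_h = nn.Linear(enc_size, attn_size, bias=False)
        self.v = nn.Linear(attn_size, 1, bias=False)

    def forward(self, s_prev, enc_states):  # s_prev: (B, dec), enc_states: (B, T, enc)
        scores = self.v(torch.tanh(self.W_s(s_prev).unsqueeze(1) + self.W_h(enc_states)))  # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)
        context = (alpha * enc_states).sum(dim=1)                                          # (B, enc)
        return context, alpha.squeeze(-1)
```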
The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence. A soft version of cross entropy loss is applied to maximize the conditional probability, which we will describe in detail.
Coverage Mechanism
Different from natural language generation tasks, there is no duplicate herb in the TCM prescription generation task. When directly applying the seq2seq model to this task, the decoder tends to generate some frequently observed herbs over and over again. Although we can prune the repeated herbs through post-processing by eliminating the repeated ones, it still hurts the recall performance as the maximum length of a prescription is limited. This situation remains true when we use an $<EOS>$ label to indicate where the generation should stop.
To encourage the decoder to generate more diverse and reasonable herb tokens, we propose to apply coverage mechanism to make the model aware of the already generated herbs. Coverage mechanism BIBREF7 , BIBREF8 , BIBREF9 was first proposed to help the decoder focus on the part that has not been paid much attention by feeding a fertility vector to the attention calculation, indicating how much information of the input is used.
In our model, we do not use the fertility vector to tune the attention weights. The reason is that the symptoms are related to each other and together describe the whole disease, which is explained in Section "Introduction". Still, inspired by its motivation, we adapt the coverage mechanism to the decoder, where a coverage vector is fed to the GRU cell together with the context vector. Equation 10 is then replaced by the following ones.
$$a_{t} = \tanh (WD_{t}+b) \\ s_{t} = f(s_{t-1}, c_{t}, Ey_{t-1}, a_{t})$$ (Eq. 12)
where $a_{t}$ is the coverage vector at the $t$-th time step in decoding. $D_{t}$ is the multi-hot representation of the tokens generated up to the $t$-th time step. $W\in \mathbb {R}^{V\times H}$ is a learnable parameter matrix, where $V$ is the size of the herb vocabulary and $H$ is the size of the hidden state. By feeding the coverage vector, which is also a sketch of the generated herbs, to the GRU as part of the input, our model can softly shift more probability to the herbs that have not been predicted. This way, the model is encouraged to produce novel herbs rather than repeatedly predicting the frequently observed ones, thus increasing the recall rate.
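A small sketch of the coverage computation of Equation 12 follows; building $D_t$ as a multi-hot vector over the herb vocabulary and the way $a_t$ is later concatenated with the other decoder inputs are illustrative choices, not the authors' exact code.

```python
import torch
import torch.nn as nn

class Coverage(nn.Module):
    """a_t = tanh(W D_t + b), where D_t marks the herbs generated before step t."""
    def __init__(self, herb_vocab_size, hidden_size):
        super().__init__()
        self.V = herb_vocab_size
        self.proj = nn.Linear(herb_vocab_size, hidden_size)  # W (and bias b) of Eq. 12

    def forward(self, generated_ids):
        # generated_ids: (B, t) indices of herbs already produced before this step
        D_t = torch.zeros(generated_ids.size(0), self.V, device=generated_ids.device)
        if generated_ids.size(1) > 0:
            D_t.scatter_(1, generated_ids, 1.0)
        return torch.tanh(self.proj(D_t))  # a_t, fed to the GRU together with c_t and E y_{t-1}
```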
Soft Loss Function
We argue that even though the order of the herbs matters when generating the prescription BIBREF10, BIBREF11, we should not strictly restrict the order. However, the traditional cross entropy loss function applied to the basic seq2seq model puts a strict assumption on the order of the labels. To deal with the task of predicting weakly ordered labels (or even unordered labels), we propose a soft loss function instead of the original hard cross entropy loss function:
$$loss = -\sum _{t}\ q^{\prime }_{t}\ log(p_t)$$ (Eq. 14)
Instead of using the original hard one-hot target probability $q_t$, we use a soft target probability distribution $q^{\prime }_{t}$, which is calculated according to $q_t$ and the target sequence $\mathbf {q}$ of this sample. Let $\mathbf {q_v}$ denote the bag-of-words representation of $\mathbf {q}$, where only the slots of the target herbs in $\mathbf {q}$ are filled with $1s$. We use a function $\xi $ to project the original target label probability $q_t$ into a new probability distribution $q^{\prime }_{t}$.
$$q^{\prime }_t = \xi (q_t, \mathbf {q_v})$$ (Eq. 15)
This function $\xi $ is designed so as to decrease the harsh punishment when the model predicts the labels in the wrong order. In this paper, we apply a simple yet effective projection function as Equation 16. This is an example implementation, and one can design more sophisticated projection functions if needed.
$$\xi (q_t,\mathbf {q_v}) = ((\mathbf {q_v}/M) + q_t) / 2 $$ (Eq. 16)
where $M$ is the length of $\mathbf {q}$. This function means that at the $t$-th step of decoding, for each target herb token $p_i$, we first split a probability density of $1.0$ equally across all the $M$ herbs into $1/M$. Then, we take the average of this probability distribution and the original probability $q_t$ to be the final probability distribution at time $t$.
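The soft loss of Equations 14-16 can be written compactly as below; the batching of the targets (a per-step gold herb plus a bag-of-herbs vector for the whole prescription) is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def soft_loss(logits, target_ids, target_bags):
    """logits: (B, T, V) decoder scores; target_ids: (B, T) gold herb per step;
    target_bags: (B, V) multi-hot bag of all herbs in the gold prescription."""
    B, T, V = logits.shape
    log_p = F.log_softmax(logits, dim=-1)
    q_t = F.one_hot(target_ids, V).float()                  # hard targets q_t
    M = target_bags.sum(dim=-1, keepdim=True).clamp(min=1)  # prescription length M
    q_soft = (target_bags / M).unsqueeze(1)                 # q_v / M, broadcast over T steps
    q_prime = 0.5 * (q_soft + q_t)                          # Eq. 16
    return -(q_prime * log_p).sum(dim=-1).mean()            # Eq. 14, averaged over steps
```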
Dataset Construction
We crawl the data from the TCM Prescription Knowledge Base (中医方剂知识库). This knowledge base includes comprehensive TCM documentation in the history. The database includes 710 TCM historic books or documents as well as some modern ones, consisting of 85,166 prescriptions in total. Each item in the database provides the name, the origin, the composition, the effect, the contraindications, and the preparation method. We clean and formalize the database and get 82,044 usable symptom-prescription pairs.
In the process of formalization, we temporarily omit the dose information and the preparation method description, as we are mainly concerned with the composition. Because the names of the herbs have evolved a lot, we devise heuristic rules as well as specific projection rules to project some rarely seen herbs to their similar forms that are normally referred to. There are also prescriptions that refer to the names of other prescriptions. We simply substitute these names with their constituents.
To make the experiment results more robust, we conduct our experiments on two separate test datasets. The first one is a subset of the data described above. We randomly split the whole data into three parts, the training data (90%), the development data (5%) and the test data (5%). The second one is a set of symptom-prescription pairs we manually extracted from the modern textbook of the course Formulaology of TCM (中医方剂学) that is popularly adopted by many TCM colleges in China.
There are more cases in the first sampled test dataset (4,102 examples), but it suffers from lower quality, as this dataset was parsed with simple rules, which may not cover all exceptions. The second test dataset has been proofread and all of the prescriptions are the most classical and influential ones in history. So the quality is much better than that of the first one. However, the number of cases is limited. There are 141 symptom-prescription pairs in the second dataset. Thus we use two test sets for evaluation to take advantage of both data magnitude and quality.
Experiment Settings
In our experiments, we implement our models with the PyTorch toolkit. We set the embedding size of both Chinese characters in the symptoms and the herb tokens to 100. We set the hidden state size to 300, and the batch size to 20. We set the maximum length of the herb sequence to 20 because the lengths of nearly all the prescriptions are within this range (see Table 2 for the statistics of the length of prescriptions). Unless specifically stated, we use bidirectional gated recurrent neural networks (BiGRNN) to encode the symptoms. We optimize with Adam BIBREF12, and use the model parameters that generate the best F1 score on the development set in testing.
Proposed Baseline
In this sub-section, we present the multi-label baseline we apply. In this model, we use a BiGRNN as the encoder, which encodes symptoms in the same way as described in Section "Methodology". Because the position of the herbs does not matter in the results, for the generation part, we implement a multi-label classification method to predict the herbs. We use the multi-label max-margin loss (MultiLabelMarginLoss in PyTorch) as the optimization objective, because this loss function is more insensitive to the threshold, thus making the model more robust. We set the threshold to 0.5, that is, if the probability given by the model is above 0.5 and within the top $k$ range (we set $k$ to 20 in our experiments, the same as for the seq2seq model), we take the tokens as answers. The way to calculate the probability is shown below.
$$p(i) = \sigma (W_{o}h_{T})$$ (Eq. 23)
where $\sigma $ indicates the non-linear function $sigmoid$, $W_{o} \in \mathbb {R}^{H \times V}$, $H$ is the size of the hidden state produced by the encoder and $V$ is the size of the herb vocabulary. $h_{T}$ is the last hidden state produced by the encoder.
During evaluation, we choose the herbs satisfying two conditions:
The predicted probability of the herb is within the top $k$ among all the herbs, where $k$ is a hyper-parameter. We set $k$ to be the same as the maximum length of the seq2seq-based models (20).
The predicted probability is above a threshold 0.5 (related to the max-margin).
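The selection rule above can be sketched as follows for a single example; the exact tie-handling and batching are illustrative.

```python
import torch

def select_herbs(probs, k=20, threshold=0.5):
    """probs: (V,) sigmoid outputs of Eq. 23 for one example; keep herbs that are
    both in the top-k and above the probability threshold."""
    topk = torch.topk(probs, k).indices
    return [int(i) for i in topk if probs[i] > threshold]
```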
Human Evaluation
Since medical treatment is a very complex task, we invite two professors from Beijing University of Chinese Medicine, which is one of the best Traditional Chinese Medicine academies in China. Both of the professors have over five years of experience in practicing traditional Chinese medical treatment. The evaluators are asked to evaluate the prescriptions with scores between 0 and 10. Both the textual symptoms and the standard reference are given, which is similar to the form of evaluation in a normal TCM examination. Different from the automatic evaluation method, the human evaluators focus on the potential curative effect of the candidate answers, rather than merely the literal similarity. We believe this way of evaluation is much more reasonable and close to reality.
Because the evaluation procedure is very time consuming (each item requires more than 1 minute), we only ask the evaluators to judge the results from test set 2.
As shown in Table 3, both the basic seq2seq model and our proposed modification are much better than the multi-label baseline. Our proposed model gets a high score of 7.3, which can be of real help to TCM practitioners when prescribing in real-life treatment.
Automatic Evaluation Results
We use micro Precision, Recall, and F1 score as the automatic metrics to evaluate the results, because the internal order between the herbs does not matter when we do not consider the prescribing process.
In Table 4, we show the results of our proposed models as well as the baseline models. One thing that should be noted is that since the data in Test set 2 (extracted from the textbook) has much better quality than Test set 1, the performance on Test set 2 is much higher than on Test set 1, which is consistent with our intuition.
From the experimental results we can see that the baseline multi-label model has a higher micro recall rate (29.72, 40.49) but much lower micro precision (10.83, 13.51). This is because, unlike the seq2seq model that dynamically determines the length of the generated sequence, the output length is rigid and can only be determined by thresholds. We take the tokens within the top 20 as the answer for the multi-label model.
As to the basic seq2seq model, although it beats the multi-label model overall, the recall rate drops substantially. This problem is partly caused by the repetition problem: the basic seq2seq model sometimes predicts highly frequent tokens instead of more meaningful ones. Apart from this, although the seq2seq-based model is better able to model the correlation between target labels, it makes a strong assumption on the order of the target sequence. In the prescription generation task, the order between herb tokens is helpful for generating the sequence. However, since the order between the herbs does not affect the effect of the prescription, we do not consider the order when evaluating the generated sequence. We call this phenomenon the “weak order” of the herbs. The overly strong assumption on order can hurt the performance of the model when the correct tokens are placed in the wrong order.
In Table 5 we show the effect of applying the coverage mechanism and the soft loss function.
The coverage mechanism gives a sketch of the prescription. The mechanism not only encourages the model to generate novel herbs but also enables the model to generate tokens based on the already predicted ones. This is supported by the improvement on Test set 2, where both the precision and the recall are improved over the basic seq2seq model.
The most significant improvement comes from applying the soft loss function. The soft loss function can relieve the strong order assumption made by the seq2seq model, because predicting a correct token in the wrong position is not as harmful as predicting a completely wrong token. This simple modification gives a big improvement on both test sets for all three evaluation metrics.
Case Study
In this subsection, we show an example from test set 2 generated by the various models in Table 6, because the quality of test set 2 is much more satisfactory. The multi-label model produces too many herbs, which lowers the precision, so we do not go deep into its results, although we report them in the table.
For the basic seq2seq model, the result is better than the multi-label baseline in this case. “柴胡” (radix bupleuri) and “葛根” (the root of kudzu vine) can be roughly matched with “恶风发热,汗出头疼” (aversion to wind, fever, sweating, headache); “甘草” (Glycyrrhiza), “陈皮” (dried tangerine or orange peel) and “桔梗” (Platycodon grandiflorum) can be roughly matched with “鼻鸣咽干,苔白不渴” (nasal obstruction, dry throat, white tongue coating, not thirsty); “川芎” (Ligusticum wallichii) can be used to treat the symptom of “头疼” (headache). In this case, most of the herbs can be matched with certain symptoms in the textual description. However, the problem is that, unlike the reference, the composition of herbs lacks an overall design. The symptoms should not be treated independently, as they are connected to other symptoms. For example, the symptom “头疼” (headache) must be treated together with “汗出” (sweating). When there is simply headache without sweat, “川芎” (Ligusticum wallichii) may be suitable. However, since there is already sweat, this herb is not suitable in this situation. This drawback results from the fact that this model heavily relies on the attention mechanism that tries to match the current hidden state in the decoder to a part of the context in the encoder every time it predicts a token.
Translation: 桂枝 - cassia twig, 芍药 - Chinese herbaceous peony, 大黄 - Rhubarb, 厚朴 - Magnolia officinalis, 枳实 - Fructus Aurantii Immaturus, 芒硝 - Mirabilite, 栀子 - Cape Jasmine Fruit, 枳壳 - Fructus Aurantii, 当归 - Angelica Sinensis, 甘草 - Glycyrrhiza, 黄芩 - Scutellaria, 生姜 - ginger, 大枣 - Chinese date, 柴胡 - radix bupleuri, 葛根 - the root of kudzu vine, 陈皮 - dried tangerine or orange peel, 桔梗 - Platycodon grandiflorum, 川芎 - Ligusticum wallichii, 麻黄 - Chinese ephedra
For our proposed model, the results are much more satisfactory. “外感风寒” (exogenous wind-cold exterior deficiency syndrome) is the cause of the disease, and the symptoms “恶风发热,汗出头疼,鼻鸣咽干,苔白不渴,脉浮缓或浮弱” (aversion to wind, fever, sweating, headache, nasal obstruction, dry throat, white tongue coating, not thirsty, floating slow pulse or floating weak pulse) are the corresponding results. The prescription generated by our proposed model can also be used to cure “外感风寒” (exogenous wind-cold exterior deficiency syndrome); in fact, “麻黄” (Chinese ephedra) and “桂枝” (cassia twig) together are a common combination to cure colds. However, “麻黄” (Chinese ephedra) is not suitable here because there is already sweat. One of the most common effects of “麻黄” (Chinese ephedra) is to make the patient sweat. Since there is already sweat, it should not be used. Compared with the basic seq2seq model, our proposed model has a sense of the overall disease, rather than merely discretely focusing on individual symptoms.
From the above analysis, we can see that compared with the basic seq2seq model, our proposed soft seq2seq model is more aware of the connections between symptoms, and has a better overall view of the disease. This advantage corresponds to the principle of prescribing in TCM that the prescription should focus on the “辩证” (the reason behind the symptoms) rather than the superficial “症” (symptoms).
Conclusion
In this paper, we propose a TCM prescription generation task that automatically predicts the herbs in a prescription based on the textual symptom descriptions. To our knowledge, this is the first time that this critical and practicable task has been considered. To advance the research in this task, we construct a dataset of 82,044 symptom-prescription pairs based on the TCM Prescription Knowledge Base.
Besides the automatic evaluation, we also invite professionals to evaluate the prescriptions given by various models, the results of which show that our model reaches a score of 7.3 out of 10, demonstrating its effectiveness. In the experiments, we observe that directly applying the seq2seq model leads to the repetition problem, which lowers the recall rate, and that the strong assumption on the order of herb tokens can hurt the performance. We propose to apply the coverage mechanism and the soft loss function to solve these problems. From the experimental results, we can see that this approach alleviates the repetition problem and results in an improved recall rate. | No |
3c362bfa11c60bad6c7ea83f8753d427cda77de0 | 3c362bfa11c60bad6c7ea83f8753d427cda77de0_0 | Q: Why did they think this was a good idea?
Text: Introduction
Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in its long history of development. Prescriptions consisting of herbal medication are the most important form of TCM treatment. TCM practitioners prescribe according to a patient's symptoms that are observed and analyzed by the practitioners themselves instead of using medical equipment, e.g., CT scanners. The patient takes the decoction made out of the herbal medication in the prescription. A complete prescription includes the composition of herbs, the proportion of herbs, the preparation method and the doses of the decoction. In this work, we focus on the composition part of the prescription, which is its most essential part.
During the long history of TCM, there have been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that, due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of a TCM prescription is shown in Table 1. The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs is not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first.
Due to the lack of digitalization and formalization, TCM has not attracted sufficient attention in the artificial intelligence community. To facilitate the studies on automatic TCM prescription generation, we collect and clean a large number of prescriptions as well as their corresponding symptom descriptions from the Internet.
Inspired by the great success of natural language generation tasks like neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , abstractive summarization BIBREF3 , generative question answering BIBREF4 , and neural dialogue response generation BIBREF5 , BIBREF6 , we propose to adopt the end-to-end paradigm, mainly the sequence to sequence model, to tackle the task of generating TCM prescriptions based on textual symptom descriptions.
The sequence to sequence model (seq2seq) consists of an encoder that encodes the input sequence and a decoder that generates the output sequence. The success in the language generation tasks indicates that the seq2seq model can learn the semantic relation between the output sequence and the input sequence quite well. It is also a desirable characteristic for generating prescriptions according to the textual symptom description.
The prescription generation task is similar to the generative question answering (QA). In such task settings, the encoder part of the model takes in the question, and encodes the sequence of tokens into a set of hidden states, which embody the information of the question. The decoder part then iteratively generates tokens based on the information encoded in the hidden states of the encoder. The model would learn how to generate response after training on the corresponding question-answer pairs.
In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. The most evident difference is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the recall rate even after applying a post-processing step to eliminate repetitions, because within the limited length of the prescription the model produces the same token over and over again rather than real and novel ones. Furthermore, the basic seq2seq model assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore how to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with a coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated, while the soft loss function is to relieve the side effect of the strict order assumption. In the experimental results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model.
The main contributions of this paper are threefold:
Related Work
There has not been much work concerning computational TCM. zhou2010development attempted to build a TCM clinical data warehouse so that the TCM knowledge can be analyzed and used. This is a typical way of collecting data, since the number of prescriptions given by the practitioners in the clinics is very large. However, in reality, most of the TCM doctors do not refer to the constructed digital systems, because the quality of the input data tends to be poor. Therefore, we choose prescriptions in the classics (books or documentation) of TCM. Although the available data can be fewer than the clinical data, it guarantees the quality of the prescriptions.
wang2004self attempted to construct a self-learning expert system with several simple classifiers to facilitate the TCM diagnosis procedure, and Wang2013TCM proposed to use shallow neural networks and CRF-based multi-label learning methods to model the TCM inquiry process, but they only considered the disease of chronic gastritis, whose taxonomy is very simple. These methods either utilize traditional data mining techniques or are heavily dependent on expert-crafted systems. Zhang2011Topic,Zhu2017TCM proposed to use LDA to model the herbs. li2017distributed proposed to learn distributed embeddings for TCM herbs with recurrent neural networks.
Methodology
The neural sequence to sequence model has proven to be very effective in a wide range of natural language generation tasks, including neural machine translation and abstractive text summarization. In this section, we first give the definition of the TCM prescription generation task. Then, we introduce how to apply the seq2seq model to the prescription composition task. Next, we show how to guide the model to generate more fruitful herbs in this setting by introducing the coverage mechanism. Finally, we introduce our novel soft loss function that relieves the strict order assumption between tokens. An overview of our final model is shown in Figure 1 .
Task Definition
Given a TCM herbal treatment dataset that consists of $N$ data samples, the $i$ -th data sample ( $x^{(i)}, p^{(i)}$ ) contains one piece of source text $x^{(i)}$ that describes the symptoms, and $M_{i}$ TCM herbs $(p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i})$ that make up the herb prescription $p^{(i)}$ .
We view the symptoms as a sequence of characters $x^{(i)} = (x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{T})$ . We do not segment the characters into words because the texts are mostly in traditional Chinese, which uses characters as basic semantic units. The herbs $p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i}$ are all different from each other.
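For concreteness, a single data sample can be pictured as a symptom character sequence paired with a set of distinct herb tokens. The following minimal sketch illustrates this format; the field names are ours and the values are only illustrative, drawn from the case study later in the paper:

```python
# One data sample: a symptom description (character sequence) paired with a set of distinct herbs.
# Field names are ours and the values are only illustrative (taken from the case study below).
sample = {
    "symptoms": "恶风发热,汗出头疼,鼻鸣咽干,苔白不渴",      # x^(i): treated as a character sequence
    "herbs": ["桂枝", "芍药", "甘草", "生姜", "大枣"],          # p^(i): M_i distinct herb tokens
}
```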
Basic Encoder-Decoder Model
The sequence-to-sequence model was first proposed to solve the machine translation problem. The model consists of two parts, an encoder and a decoder. The encoder takes in the source sequence and compresses it into a series of hidden states. The decoder generates a sequence of target tokens based on the information embodied in the hidden states given by the encoder. Typically, both the encoder and the decoder are implemented with recurrent neural networks (RNN).
In our TCM prescription generation task, the encoder RNN converts the variable-length symptoms in character sequence $x = (x_{1},x_{2},...,x_{T})$ into a set of hidden representations $h = (h_{1},h_{2},...,h_{T})$ , by iterating the following equations along time $t$ :
$$h_{t} = f(x_{t},h_{t-1})$$ (Eq. 8)
where $f$ is an RNN-family function. In our implementation, we choose the gated recurrent unit (GRU BIBREF1 ) as $f$ , as the gating mechanism is expected to model long-distance dependencies better. Furthermore, we choose the bidirectional version of the recurrent neural network as the encoder, to counter the problem that later words get more emphasis in the unidirectional version. We concatenate the forward and backward $h_{t}$ and get $\widehat{h_{t}}$ as the final representation of the hidden state at time step $t$ .
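A minimal PyTorch sketch of such a bidirectional GRU encoder is given below; the class and variable names are illustrative and the dimensions follow the settings reported in the experiment section, so this should be read as a sketch rather than the exact implementation:

```python
import torch
import torch.nn as nn

class SymptomEncoder(nn.Module):
    """Minimal bidirectional GRU encoder for Eq. 8; names and dimensions are illustrative."""
    def __init__(self, char_vocab, emb_dim=100, hidden_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(char_vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (batch, T) indices of the symptom characters
        embedded = self.embedding(char_ids)      # (batch, T, emb_dim)
        outputs, _ = self.gru(embedded)          # (batch, T, 2 * hidden_dim)
        # outputs[:, t] concatenates the forward and backward h_t, i.e. the \hat{h}_t above
        return outputs
```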
We compute the context vector $c_{t}$ , representing the whole source $x$ at the $t$ -th decoding step, through a non-linear function $q$ , normally known as the attention mechanism:
$$c_{t} = \sum _{j=1}^{T}\alpha _{tj}h_{j} \\ \alpha _{tj} = \frac{\text{exp}\left( a\left(s_{t-1},h_{j}\right)\right)}{\sum _{k=1}^{T}\text{exp}\left( a\left(s_{t-1},h_{k}\right)\right)}$$ (Eq. 9)
The context vector $c_{t}$ is calculated as a weighted sum of hidden representation produced by the encoder $\textbf {h} = (h_{1},...,h_{T})$ . $a(s_{t-1},h_{j})$ is a soft alignment function that measures the relevance between $s_{t-1}$ and $h_{j}$ . It computes how much $h_j$ is needed for the $t$ -th output word based on the previous hidden state of the decoder $s_{t-1}$ . The decoder is another RNN. It generates a variable-length sequence $y = (y_{1},y_{2}, ..., y_{T^{\prime }})$ token by token (herb), through a conditional language model:
$$s_{t} = f(s_{t-1},c_{t},Ey_{t-1}) \\ p(y_{t}|y_{1,...,t},x) = g(s_{t})$$ (Eq. 10)
where $s_{t}$ is the hidden state of the decoder RNN at time step $t$ and $f$ is also a gated recurrent unit. The non-linear function $g$ is a $softmax$ layer, which outputs the probabilities of all the herbs in the herb vocabulary. $E \in \mathbb {R}^{V\times d}$ is the embedding matrix of the target tokens, where $V$ is the size of the herb vocabulary and $d$ is the embedding dimension. $y_{t-1}$ is the last predicted token.
In the decoder, the context vector $c_{t}$ is calculated based on the hidden state $s_{t-1}$ of the decoder at time step $t-1$ and all the hidden states in the encoder. This procedure is known as the attention mechanism. The attention mechanism is expected to supply the information from the source sequence that is most relevant to the current hidden state of the decoder, instead of depending only on a fixed vector produced by the encoder.
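Since the concrete form of the alignment function $a(s_{t-1},h_{j})$ is not spelled out above, the sketch below assumes the common additive (Bahdanau-style) formulation; names and dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Computes c_t of Eq. 9; the additive form of a(s_{t-1}, h_j) is an assumption."""
    def __init__(self, dec_dim=300, enc_dim=600, attn_dim=300):
        super().__init__()
        self.W_s = nn.Linear(dec_dim, attn_dim, bias=False)
        self.W_h = nn.Linear(enc_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, s_prev, enc_outputs):
        # s_prev: (batch, dec_dim); enc_outputs: (batch, T, enc_dim)
        scores = self.v(torch.tanh(self.W_s(s_prev).unsqueeze(1) + self.W_h(enc_outputs)))
        alpha = F.softmax(scores, dim=1)                # (batch, T, 1) attention weights
        context = (alpha * enc_outputs).sum(dim=1)      # weighted sum of encoder states -> c_t
        return context, alpha.squeeze(-1)
```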
The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence. A soft version of the cross entropy loss is applied to maximize the conditional probability, which we describe in detail below.
Coverage Mechanism
Different from natural language generation tasks, there are no duplicate herbs in the TCM prescription generation task. When directly applying the seq2seq model to this task, the decoder tends to generate some frequently observed herbs over and over again. Although we can prune the repeated herbs through post-processing, this still hurts the recall performance, as the maximum length of a prescription is limited. This remains true when we use an $<EOS>$ label to indicate where the generation should stop.
To encourage the decoder to generate more diverse and reasonable herb tokens, we propose to apply the coverage mechanism to make the model aware of the already generated herbs. The coverage mechanism BIBREF7 , BIBREF8 , BIBREF9 was first proposed to help the decoder focus on the parts of the input that have not yet received much attention, by feeding a fertility vector, indicating how much information of the input has been used, to the attention calculation.
In our model, we do not use the fertility vector to tune the attention weights. The reason is that the symptoms are related to each other and altogether describe the whole disease, as explained in Section "Introduction" . Still, inspired by its motivation, we adapt the coverage mechanism to the decoder, where a coverage vector is fed to the GRU cell together with the context vector. Equation 10 is then replaced by the following ones.
$$a_{t} = \tanh (WD_{t}+b) \\ s_{t} = f(s_{t-1}, c_{t}, Ey_{t-1}, a_{t})$$ (Eq. 12)
where $a_{t}$ is the coverage vector at the $t$ -th time step in decoding. $D_{t}$ is the bag-of-words (multi-hot) representation of the tokens generated up to the $t$ -th time step. $W\in \mathbb {R}^{V\times H}$ is a learnable parameter matrix, where $V$ is the size of the herb vocabulary and $H$ is the size of the hidden state. By feeding the coverage vector, which is a sketch of the generated herbs, to the GRU as part of the input, our model can softly shift more probability to the herbs that have not been predicted yet. This way, the model is encouraged to produce novel herbs rather than repeatedly predicting the frequently observed ones, thus increasing the recall rate.
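A sketch of one decoding step with the coverage vector is given below; Equation 12 only states that the coverage vector is fed to the GRU cell together with the context vector, so the exact concatenation of inputs shown here is an assumption:

```python
import torch
import torch.nn as nn

class CoverageDecoderStep(nn.Module):
    """One decoding step with the coverage vector of Eq. 12; the input wiring is our assumption."""
    def __init__(self, herb_vocab, emb_dim=100, hidden_dim=300, enc_dim=600):
        super().__init__()
        self.embedding = nn.Embedding(herb_vocab, emb_dim)
        self.coverage_proj = nn.Linear(herb_vocab, hidden_dim)   # plays the role of W and b
        self.cell = nn.GRUCell(emb_dim + enc_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, herb_vocab)

    def forward(self, prev_herb, s_prev, context, generated_mask):
        # generated_mask: (batch, herb_vocab) multi-hot D_t of the herbs produced so far
        a_t = torch.tanh(self.coverage_proj(generated_mask))     # coverage vector a_t
        step_input = torch.cat([self.embedding(prev_herb), context, a_t], dim=-1)
        s_t = self.cell(step_input, s_prev)                      # the GRU sees the coverage sketch
        return torch.log_softmax(self.out(s_t), dim=-1), s_t
```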
Soft Loss Function
We argue that even though the order of the herbs matters when generating the prescription BIBREF10 , BIBREF11 , we should not strictly restrict the order. However, the traditional cross entropy loss function applied to the basic seq2seq model puts a strict assumption on the order of the labels. To deal with the task of predicting weakly ordered labels (or even unordered labels), we propose a soft loss function instead of the original hard cross entropy loss function:
$$loss = -\sum _{t}\ q^{\prime }_{t}\ log(p_t)$$ (Eq. 14)
Instead of using the original hard one-hot target probability $q_t$ , we use a soft target probability distribution $q^{\prime }_{t}$ , which is calculated according to $q_t$ and the target sequence $\mathbf {q}$ of this sample. Let $\mathbf {q_v}$ denote the bag-of-words representation of $\mathbf {q}$ , where only the slots of the target herbs in $\mathbf {q}$ are filled with $1s$ . We use a function $\xi $ to project the original target label probability $q_t$ into a new probability distribution $q^{\prime }_{t}$ .
$$q^{\prime }_t = \xi (q_t, \mathbf {q_v})$$ (Eq. 15)
This function $\xi $ is designed so as to decrease the harsh punishment when the model predicts the labels in the wrong order. In this paper, we apply a simple yet effective projection function as Equation 16 . This is an example implementation, and one can design more sophisticated projection functions if needed.
$$\xi (q_t,\mathbf {q_v}) = ((\mathbf {q_v}/M) + q_t) / 2 $$ (Eq. 16)
where $M$ is the length of $\mathbf {q}$ , i.e., the number of herbs in the target prescription. This function means that at the $t$ -th decoding step, we first spread a probability mass of $1.0$ equally across all the $M$ target herbs, giving each a probability of $1/M$ . Then, we take the average of this distribution and the original target probability $q_t$ as the final target distribution at time $t$ .
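The projection of Equation 16 and the soft cross entropy of Equation 14 can be written compactly as follows, assuming that $q_t$ is a one-hot vector and $\mathbf {q_v}$ a multi-hot vector over the herb vocabulary:

```python
import torch
import torch.nn.functional as F

def soft_target(q_t, q_v):
    """Eq. 16: average the hard target q_t with a uniform distribution over the M gold herbs."""
    M = q_v.sum(dim=-1, keepdim=True)        # number of herbs in this prescription
    return (q_v / M + q_t) / 2.0

def soft_loss(logits, q_t, q_v):
    """Soft cross entropy of Eq. 14 at one decoding step; q_t is one-hot, q_v is multi-hot."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(soft_target(q_t, q_v) * log_p).sum(dim=-1).mean()
```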
Dataset Construction
We crawl the data from the TCM Prescription Knowledge Base (中医方剂知识库) . This knowledge base includes comprehensive TCM documentation throughout history. The database covers 710 TCM historic books or documents as well as some modern ones, consisting of 85,166 prescriptions in total. Each item in the database provides the name, the origin, the composition, the effect, the contraindications, and the preparation method. We clean and formalize the database and obtain 82,044 usable symptom-prescription pairs.
In the process of formalization, we temporarily omit the dose information and the preparation method description, as we are mainly concerned with the composition. Because the names of the herbs have evolved considerably, we devise heuristic rules as well as specific projection rules to map some rarely seen herbs to the similar forms they are normally referred to by. There are also prescriptions that refer to the name of other prescriptions; we simply substitute these names with their constituents.
To make the experimental results more robust, we conduct our experiments on two separate test datasets. The first one is a subset of the data described above: we randomly split the whole data into three parts, the training data (90%), the development data (5%) and the test data (5%). The second one is a set of symptom-prescription pairs we manually extracted from the modern textbook of the course Formulaology of TCM (中医方剂学) that is widely adopted by TCM colleges in China.
There are more cases in the first sampled test dataset (4,102 examples), but it suffers from lower quality, as this dataset was parsed with simple rules that may not cover all exceptions. The second test dataset has been proofread, and all of its prescriptions are among the most classical and influential ones in history, so its quality is much better than that of the first one. However, the number of cases is limited: there are 141 symptom-prescription pairs in the second dataset. Thus we evaluate on both test sets to take advantage of both data magnitude and data quality.
Experiment Settings
In our experiments, we implement our models with the PyTorch toolkit . We set the embedding size of both the Chinese characters in the symptoms and the herb tokens to 100. We set the hidden state size to 300, and the batch size to 20. We set the maximum length of the herb sequence to 20, because the lengths of nearly all the prescriptions are within this range (see Table 2 for the statistics of prescription lengths). Unless specifically stated, we use bidirectional gated recurrent neural networks (BiGRNN) to encode the symptoms. We use Adam BIBREF12 as the optimizer, and use the model parameters that achieve the best F1 score on the development set for testing.
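For reference, these settings can be gathered into a single configuration object; this is only an assumed summary and the field names are ours:

```python
# Assumed summary of the settings above as a single config object; the field names are ours.
CONFIG = {
    "char_emb_dim": 100,    # embedding size of symptom characters
    "herb_emb_dim": 100,    # embedding size of herb tokens
    "hidden_dim": 300,      # GRU hidden state size
    "batch_size": 20,
    "max_herbs": 20,        # maximum length of the generated herb sequence
    "optimizer": "Adam",
    "model_selection": "best F1 on the development set",
}
```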
Proposed Baseline
In this sub-section, we present the multi-label baseline we apply. In this model, we use a BiGRNN as the encoder, which encodes the symptoms in the same way as described in Section "Methodology" . Because the position of the herbs does not matter in the results, for the generation part we implement a multi-label classification method to predict the herbs. We use the multi-label max-margin loss (MultiLabelMarginLoss in PyTorch) as the optimization objective, because this loss function is less sensitive to the threshold, making the model more robust. We set the threshold to 0.5, that is, if the probability given by the model is above 0.5 and within the top $k$ range (we set $k$ to 20 in our experiments, the same as for the seq2seq models), we take the tokens as answers. The way to calculate the probability is shown below.
$$p(i) = \sigma (W_{o}h_{T})$$ (Eq. 23)
where $\sigma $ indicates the non-linear function $sigmoid$ , $W_{o} \in \mathbb {R}^{H \times V}$ , $H$ is the size of the hidden state produced by the encoder and $V$ is the size of the herb vocabulary. $h_{T}$ is the last hidden state produced by the encoder.
During evaluation, we choose the herbs satisfying two conditions (a small selection sketch follows this list):
The predicted probability of the herb is within the top $k$ among all the herbs, where $k$ is a hyper-parameter. We set $k$ to be the same as the maximum decoding length of the seq2seq-based models (20).
The predicted probability is above a threshold 0.5 (related to the max-margin).
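The selection sketch referred to above is given below, assuming the scoring of Equation 23 with $W_{o}$ of shape $H \times V$ ; the function and variable names are ours:

```python
import torch

def select_herbs(h_T, W_o, k=20, threshold=0.5):
    """Keep herbs whose sigmoid score (Eq. 23) is above the threshold and within the top k."""
    probs = torch.sigmoid(h_T @ W_o)                 # (batch, V), with W_o of shape (H, V)
    topk_scores, topk_idx = probs.topk(k, dim=-1)
    selected = []
    for scores, idx in zip(topk_scores, topk_idx):   # iterate over the batch
        selected.append([int(i) for s, i in zip(scores, idx) if s > threshold])
    return selected
```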
Human Evaluation
Since medical treatment is a very complex task, we invite two professors from Beijing University of Chinese Medicine, one of the best Traditional Chinese Medicine academies in China. Both professors have over five years of experience in practicing traditional Chinese medical treatment. The evaluators are asked to score the prescriptions between 0 and 10. Both the textual symptoms and the standard reference are given, which is similar to the form of evaluation in a normal TCM examination. Different from the automatic evaluation, the human evaluators focus on the potential curative effect of the candidate answers, rather than merely their literal similarity. We believe this way of evaluation is much more reasonable and closer to reality.
Because the evaluation procedure is very time consuming (each item requires more than 1 minute), we only ask the evaluators to judge the results from test set 2.
As shown in Table 3 , both the basic seq2seq model and our proposed modification score much better than the multi-label baseline. Our proposed model achieves a high score of 7.3, which suggests it can be of real help to TCM practitioners when prescribing in real-life treatment.
Automatic Evaluation Results
We use micro Precision, Recall, and F1 score as the automatic metrics to evaluate the results, because the internal order between the herbs does not matter when we do not consider the prescribing process.
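A straightforward way to compute these metrics over predicted and reference herb sets is sketched below; this illustrates the metric definition rather than the exact evaluation script:

```python
def micro_prf(predicted, reference):
    """Micro-averaged precision/recall/F1 over herb sets, ignoring the order of herbs."""
    tp = fp = fn = 0
    for pred, gold in zip(predicted, reference):
        pred, gold = set(pred), set(gold)
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```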
In Table 4 , we show the results of our proposed models as well as the baseline models. One thing to note is that since the data in Test set 2 (extracted from the textbook) have much better quality than Test set 1, the performance on Test set 2 is much higher than on Test set 1, which is consistent with our intuition.
From the experimental results we can see that the multi-label baseline has a higher micro recall (29.72, 40.49) but much lower micro precision (10.83, 13.51). This is because, unlike the seq2seq model that dynamically determines the length of the generated sequence, its output length is rigid and can only be determined by thresholds; we take the tokens within the top 20 as the answer for the multi-label model.
As for the basic seq2seq model, although it beats the multi-label model overall, its recall rate drops substantially. This is partly caused by the repetition problem: the basic seq2seq model sometimes predicts highly frequent tokens instead of more meaningful ones. Apart from this, although the seq2seq-based model is better able to model the correlation between target labels, it makes a strong assumption on the order of the target sequence. In the prescription generation task, the order between herb tokens is helpful for generating the sequence. However, since the order between the herbs does not affect the effect of the prescription, we do not consider the order when evaluating the generated sequence. We refer to this phenomenon as the herbs being under a “weak order”. The overly strong assumption on order can hurt the performance of the model when the correct tokens are placed in the wrong order.
In Table 5 we show the effect of applying coverage mechanism and soft loss function.
The coverage mechanism gives the model a sketch of the prescription generated so far. It not only encourages the model to generate novel herbs but also enables the model to generate tokens based on the already predicted ones. This is supported by the improvement on Test set 2, where both the precision and the recall are improved over the basic seq2seq model.
The most significant improvement comes from applying the soft loss function. The soft loss function relieves the strong order assumption made by the seq2seq model, because predicting a correct token in the wrong position is not as harmful as predicting a completely wrong token. This simple modification gives a big improvement on both test sets for all three evaluation metrics.
Case Study
In this subsection, we show an example generated by the various models on test set 2 in Table 6 , because the quality of test set 2 is much more satisfactory. Since the multi-label model produces too many herbs, which lowers the precision, we do not analyze its results in depth, although we report them in the table.
For the basic seq2seq model, the result is better than the multi-label baseline in this case. “柴胡” (radix bupleuri)、“葛根” (the root of kudzu vine) can be roughly matched with “恶风发热,汗出头疼” (aversion to wind, fever, sweating, headache), “甘草” (Glycyrrhiza)、“陈皮” (dried tangerine or orange peel)、“桔梗” (Platycodon grandiflorum) can be roughly matched with “鼻鸣咽干,苔白不渴” (nasal obstruction, dry throat, white tongue coating, not thirsty), and “川芎” (Ligusticum wallichii) can be used to treat the symptom of “头疼” (headache). In this case, most of the herbs can be matched with certain symptoms in the textual description. However, the problem is that, unlike the reference, the composition of herbs lacks an overall design. The symptoms should not be treated independently, as they are connected to other symptoms. For example, the symptom “头疼” (headache) must be treated together with “汗出” (sweat). When there is simply headache without sweat, “川芎” (Ligusticum wallichii) may be suitable. However, since there is already sweat, this herb is not suitable in this situation. This drawback results from the fact that this model heavily relies on the attention mechanism, which tries to match the current hidden state in the decoder to a part of the context in the encoder every time it predicts a token.
Translation: 桂枝 - cassia twig, 芍药 - Chinese herbaceous peony, 大黄 - Rhubarb, 厚朴 - Magnolia officinalis, 枳实 - Fructus Aurantii Immaturus, 芒硝 - Mirabilite, 栀子 - Cape Jasmine Fruit, 枳壳 - Fructus Aurantii, 当归 - Angelica Sinensis, 甘草 - Glycyrrhiza, 黄芩 - Scutellaria, 生姜 - ginger, 大枣 - Chinese date, 柴胡 - radix bupleuri, 葛根 - the root of kudzu vine, 陈皮 - dried tangerine or orange peel, 桔梗 - Platycodon grandiflorum, 川芎 - Ligusticum wallichii, 麻黄 - Chinese ephedra
For our proposed model, the results are much more satisfactory. “外感风寒” (exogenous wind-cold exterior deficiency syndrome) is the cause of the disease, and the symptoms “恶风发热,汗出头疼,鼻鸣咽干,苔白不渴,脉浮缓或浮弱” (aversion to wind, fever, sweating, headache, nasal obstruction, dry throat, white tongue coating, not thirsty, floating slow pulse or floating weak pulse) are the corresponding manifestations. The prescription generated by our proposed model can also be used to cure “外感风寒” (exogenous wind-cold exterior deficiency syndrome); in fact, “麻黄” (Chinese ephedra) and “桂枝” (cassia twig) together is a common combination to cure a cold. However, “麻黄” (Chinese ephedra) is not suitable here because there is already sweat. One of the most common effects of “麻黄” (Chinese ephedra) is to make the patient sweat; since there is already sweat, it should not be used. Compared with the basic seq2seq model, our proposed model has a sense of the overall disease, rather than merely discretely focusing on individual symptoms.
From the above analysis, we can see that compared with the basic seq2seq model, our proposed soft seq2seq model is more aware of the connections between symptoms and has a better overall view of the disease. This advantage corresponds to the principle of prescribing in TCM that a prescription should focus on the “辩证” (the reason behind the symptoms) rather than the superficial “症” (symptoms).
Conclusion
In this paper, we propose a TCM prescription generation task that automatically predicts the herbs in a prescription based on the textual symptom descriptions. To our knowledge, this is the first time that this critical and practicable task has been considered. To advance the research in this task, we construct a dataset of 82,044 symptom-prescription pairs based on the TCM Prescription Knowledge Base.
Besides the automatic evaluation, we also invite professionals to evaluate the prescriptions given by various models, the results of which show that our model reaches the score of 7.3 out of 10, demonstrating the effectiveness. In the experiments, we observe that directly applying seq2seq model would lead to the repetition problem that lowers the recall rate and the strong assumption of the order between herb tokens can hurt the performance. We propose to apply the coverage mechanism and the soft loss function to solve this problem. From the experimental results, we can see that this approach alleviates the repetition problem and results in an improved recall rate. | They think it will help human TCM practitioners make prescriptions. |
e78a47aec37d9a3bec5a18706b0a462c148c118b | e78a47aec37d9a3bec5a18706b0a462c148c118b_0 | Q: How many languages are included in the tweets?
Text: Introduction
Human languages are intertwined with their cultures and societies, having evolved together, reflecting them and in turn shaping them BIBREF0 , BIBREF1 . Part-of-day nouns (e.g. ‘morning’ or ‘night’) are an example of this, as their meaning depends on how each language's speakers organize their daily schedule. For example, while the morning in English-speaking countries is assumed to end at noon, the Spanish term (‘mañana’) is understood to span until lunch time, which normally takes place between 13:00 and 15:00 in Spain. It is fair to relate this difference to cultural (lunch being the main meal of the day in Spain, as opposed to countries like the uk, and therefore being a milestone in the daily timetable) and sociopolitical factors (the late lunch time being influenced by work schedules and the displacement of the Spanish time zones with respect to solar time). Similar differences have been noted for different pairs of languages BIBREF2 and for cultures using the same language BIBREF3 , based on manual study, field research and interviews with natives. Work on automatically extracting the semantics of part-of-day nouns is scarce, as classic corpora are not timestamped. Reiter2003a,Reiter2003b overcome it by analyzing weather forecasts and aligning them to timestamped simulations, giving approximate groundings for time-of-day nouns and showing idiolectal variation on the term ‘evening’, but the work is limited to English.
The relation between language and sociocultural factors implies that the semantics of part-of-day nouns (e.g. 'end of the morning') cannot be studied in isolation from social habits (e.g. 'typical lunch time'). A relevant study of such habits is done by walch2016global, who develop an app to collect sleep habits from users worldwide. While they do not study the meaning of words, their insights are used for validation.
We propose a new approach to study the semantics of part-of-day nouns by exploiting Twitter and the time-specific greetings (e.g. ‘good morning’) used in different cultures. By mining tweets with these greetings, we obtain a large, worldwide sample of their usage. Since many tweets come with time and geolocation metadata, we can know the local time and country at which each one was emitted. The main contribution of the paper is to show how it is possible to learn the semantics of these terms in a much more extensive way than previous work, at a global scale, with less effort and allowing statistical testing of differences in usage between terms, countries and languages.
Materials and methods
To ground the semantics of greetings we used 5 terms as seeds: ‘good morning’, ‘good afternoon’, ‘good evening’, ‘good night’ and ‘hello’ (a time-unspecific greeting used for comparison). We translated them to 53 languages and variants using Bing translator. We use italics to refer to greetings irrespective of the language. 172,802,620 tweets were collected from Sept. 2 to Dec. 7 2016.
For some languages (e.g. Spanish), there is no differentiation between ‘good evening’ and ‘good night’, and they both are translated to the same expression. For some others, some expressions cannot be considered equivalent, e.g. ‘good morning’ is translated to ‘bonjour’ in French, which is however commonly used as ‘hello’, or simply as ‘good day’.
Text preprocessing is not necessary: we rely on metadata, not on the tweet itself, and only the seed words are needed to categorize tweets within a part of day. To clean up the data, we removed retweets, as they last for hours, biasing the temporal analysis. Duplicate tweets were kept, as similar messages from different days and users (e.g. ‘good night!’) are needed for the task at hand. Tweets need to be associated with a timestamp and country-level geolocation. Tweets have a creation time, composed of a utc time and a utc offset that varies depending on the time zone. However, most tweets are not geolocated and we must rely on the data provided by the user. This may be fake or incomplete, e.g. specifying only a village. We used fine-grained databases to do the mapping to the country level location and performed a sanity check, comparing the Twitter offset to the valid set of offsets for that country, to reduce the amount of wrongly geolocated tweets. Comparing the solar and standard time could provide more insights, but this requires a fine-grained geolocation of the tweets. We obtained a dataset of 10,523,349 elements, available at https://github.com/aghie/peoples2018grounding: 4,503,077 good morning's, 599,586 good afternoon's, 214,231 good evening's, 880,003 good night's and 4,359,797 hello's.
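A sketch of this sanity check is given below; the lookup table, offsets and field names are purely illustrative, since the fine-grained databases used for the mapping are not detailed here:

```python
from datetime import timedelta

# Hypothetical lookup from country code to its set of valid UTC offsets (in hours).
VALID_OFFSETS = {"ES": {1, 2}, "BR": {-5, -4, -3, -2}, "IN": {5.5}}

def local_time_or_none(tweet, country):
    """Keep a tweet only if its UTC offset is plausible for the country inferred from the profile."""
    offset_hours = tweet["utc_offset"] / 3600          # field names here are illustrative
    if offset_hours not in VALID_OFFSETS.get(country, set()):
        return None                                     # likely wrongly geolocated; discard
    return tweet["created_at_utc"] + timedelta(seconds=tweet["utc_offset"])  # local emission time
```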
Results and validation
Given a country, some of the tweets are written in foreign languages for reasons like tourism or immigration. This paper refers to tweets written in official or de facto languages, unless otherwise specified. Also, analyzing differences according to criteria such as gender or solar time can be relevant. As determining the impact of all those is a challenge on its own, we focus on the primary research question: can we learn the semantics of part-of-day nouns from simple analysis of tweets? To verify data quality, good morning tweets were revised: out of 1,000 random tweets from the usa, 97.9% were legitimate greetings and, among the rest, some somehow reflected that the user had just started the day (e.g. ‘Didn't get any good morning sms’). We did the same for Spain (98.1% legitimate), Brazil (97.8%) and India (99.6%).
Existing work and dated events are used to ratify the results presented below.
Worldwide average greeting times
Table TABREF7 shows the average greeting times for the countries from which we collected more data. Asian, African and American countries tend to begin the day earlier than Europe (with exceptions, e.g. Germany). The table reflects that countries in southern Europe (e.g. Spain, Portugal or Greece) start the day later than the northern ones (the Netherlands or uk). For some countries, e.g. France, this information is known to be biased, as good morning (‘bonjour’) is used all along the day. A validation at a fine-grained scale is unfeasible, but the results at the country level are in line with Figure 3 of walch2016global, e.g., they state that Japan, the usa or Germany have earlier wake up times than Spain, Brazil or Turkey.
The average greeting times for good afternoon reveal insights that may stem from cultural differences (e.g. lunch break time). Anglo-Saxon and South Asian countries have the earliest afternoon (with averages between 13:00 and 14:00), while in Mediterranean countries the morning lasts longer (average greeting times for good afternoon around 15:00 or 16:00). A number of countries under the influence of the United Kingdom, such as the United States, Pakistan or India show earlier afternoon times. The opposite happens in South America, historically influenced by Portuguese and Spanish colonialism during the Early modern period, which exhibits later afternoon times.
This poses interesting questions for future work, such as whether there is a particular reason that could justify this behavior, like having more similar cuisine practices. In this context, the adoption of food practices in colonialism has already been studied by anthropologists and historians BIBREF4 . trigg2004food points out how, in the early period of Spanish colonialism in the Americas, the colonizers `civilized' the Indigenous community by making them adopt manners, dress and customs. She points out that the role of food was especially relevant due to its large social component, and was not limited to the way the food was eaten, but also to how it was prepared, served and consumed.
Twitter also reflects differences between countries regarding night life. On the one hand, Anglo-Saxon countries wish good night earlier (from 19:49 in the uk to 21:10 in Canada) than other societies. On the other hand, southern European countries go to bed later, and some of them even wish a good night after midnight (e.g. Spain). Comparing to BIBREF5 , we find similar tendencies. For example, in their study Spain, Turkey or Brazil use the smartphone until later than Canada, the usa or the uk, and therefore they go later to bed. Our Twitter approach also captures the particular case of Japanese mentioned by BIBREF5 : they wake up very early, but use the smartphone until late in the night, suggesting a later bed time.
A fine-grained analysis shows how Twitter captures other cultural and working differences. Figure FIGREF8 charts the average day time for good morning for the usa, Brazil, Spain and India during part of the polling period. The time peaks in the weekends for many of the countries, showing that Twitter captures how business and work are reduced during holidays, resulting in later wake up times.
However, this is not visible in some countries where working conditions are sometimes questioned BIBREF6 : for India the weekend peak is less pronounced, which can be considered as an indicator that a significant part of its population does not enjoy work-free weekends.
The usage of part-of-day expressions can be helpful to understand more complex issues, such as how foreigners integrate into a country and adapt to its daily schedule. We take the usa as an example, as it has a large foreign community of Spanish speakers, mainly from Mexico (and in a smaller proportion from other Latin American countries). If we calculate the average day time for the Spanish form of ‘good morning’ (‘buenos días’) in the usa, we obtain 08:09, while the corresponding English greeting's average time is 08:33. This is reinforced by Figure FIGREF10 , where the ‘buenos días’ average day time is consistently lower than that of ‘good morning’. This would be in line with their presence in low-wage jobs that require waking up earlier, e.g. waiting, cleaning or construction work BIBREF7 , BIBREF8 .
It is worth noting that, assuming that these ‘buenos días’ greetings come from latinos, those in the usa wake up even earlier than in their countries of origin (see Table TABREF7 ).
Figure FIGREF8 also shows how national holidays influence societies. For example, Nov. 2 (Day of the Dead) and Nov. 15 (Proclamation of the Republic) are holidays in Brazil, producing a peak in that country's graph similar to the behavior on the weekends. Similarly, Nov. 1 (All Saints' Day) and Dec. 6 (Constitution Day) are holidays in Spain, and similar peaks are observed too. From Figure FIGREF10 we can see how Thanksgiving (Nov. 24 in 2016) reflects a four-day weekend in the usa: many businesses allow employees to take this holiday from Thursday, resulting in a gradual and increasing peak that spans until Sunday. This is captured by the English good mornings, but not by the Spanish ones. The day after the usa 2016 elections (Nov. 9), a valley occurs in the good morning time for the States (Figure FIGREF8 ). The winner was not known until 03:00, suggesting that the distribution of greetings reflects social behaviors in other special events.
Daily analysis
Twitter can be used to do a time-of-day analysis, e.g., as said in § SECREF6 , ‘bonjour’ is assumed to be used all along the day. To test this, we take Canada, where French and English are official languages. Figure FIGREF12 shows how ‘bonjour’ and ‘salut’ (‘hello’) are used all along the day, while ‘good morning’ is used in the morning hours. English and French hello's share a similar distribution.
Figure FIGREF13 shows a greeting area chart for the usa, showing how ‘good evening’ and ‘good afternoon’ are well differentiated, with the transition happening over 16:30. This contrasts to countries such as Spain (Figure FIGREF14 ), where the language has a single word (‘tarde’) for ‘evening’ and ‘afternoon’, whose greeting spans from over 14:00, as the morning ends late (see § SECREF1 ), to 21:00.
Area plots like these give a clear picture of the semantics of part-of-day nouns, as they depict the exact times when they are used. The precise semantics can be grounded more rigorously using statistical testing to know the exact time intervals at which people significantly use a specific greeting.
For example, to know when to switch from good morning to good afternoon in Spanish, we can: (1) group the number of ‘buenos días’ (‘good morning’) and ‘buenas tardes’ (‘good afternoon’) by intervals of 10 minutes, and (2) apply a binomial test to each interval, to determine if one of the greetings is significantly more likely to occur than the other (assuming equal probability of occurrence). For example, for Spain, we obtain that the morning ends at 14:00 (p-value= INLINEFORM0 at 14:00, 0.09 at 14:10) and the afternoon starts at 14:40 (p-value becomes statistically significant again with INLINEFORM1 , showing a significant majority of good afternoon).
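A sketch of this procedure is shown below, assuming SciPy's binomtest and tweets already reduced to (local time, greeting) pairs; the function and variable names are ours:

```python
from collections import Counter
from scipy.stats import binomtest

def greeting_majority(tweets):
    """Per 10-minute bin, test whether 'buenos días' significantly outnumbers 'buenas tardes' (H0: p = 0.5)."""
    bins = Counter()
    for local_time, greeting in tweets:                  # greeting is one of the two Spanish forms
        slot = (local_time.hour * 60 + local_time.minute) // 10
        bins[(slot, greeting)] += 1
    results = {}
    for slot in sorted({s for s, _ in bins}):
        n_morning = bins[(slot, "buenos días")]
        n_total = n_morning + bins[(slot, "buenas tardes")]
        if n_total:
            p_value = binomtest(n_morning, n_total, 0.5).pvalue
            results[slot] = (n_morning / n_total, p_value)   # share of 'buenos días' and its p-value
    return results
```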
Conclusion
We crawled Twitter to study the semantics of part-of-day nouns in different countries and societies, showed examples from the polled period and ratified them against existing research and dated events. For space reasons we cannot show insights for all scenarios, but full results are at https://github.com/aghie/peoples2018grounding.
Acknowledgments
DV and CGR receive funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). | Unanswerable |
351510da69ab6879df5ff5c7c5f49a8a7aea4632 | 351510da69ab6879df5ff5c7c5f49a8a7aea4632_0 | Q: What languages are explored?
Text: Introduction
Human languages are intertwined with their cultures and societies, having evolved together, reflecting them and in turn shaping them BIBREF0 , BIBREF1 . Part-of-day nouns (e.g. ‘morning’ or ‘night’) are an example of this, as their meaning depends on how each language's speakers organize their daily schedule. For example, while the morning in English-speaking countries is assumed to end at noon, the Spanish term (‘mañana’) is understood to span until lunch time, which normally takes place between 13:00 and 15:00 in Spain. It is fair to relate this difference to cultural (lunch being the main meal of the day in Spain, as opposed to countries like the uk, and therefore being a milestone in the daily timetable) and sociopolitical factors (the late lunch time being influenced by work schedules and the displacement of the Spanish time zones with respect to solar time). Similar differences have been noted for different pairs of languages BIBREF2 and for cultures using the same language BIBREF3 , based on manual study, field research and interviews with natives. Work on automatically extracting the semantics of part-of-day nouns is scarce, as classic corpora are not timestamped. Reiter2003a,Reiter2003b overcome it by analyzing weather forecasts and aligning them to timestamped simulations, giving approximate groundings for time-of-day nouns and showing idiolectal variation on the term ‘evening’, but the work is limited to English.
The relation between language and sociocultural factors implies that the semantics of part-of-day nouns (e.g. 'end of the morning') cannot be studied in isolation from social habits (e.g. 'typical lunch time'). A relevant study of such habits is done by walch2016global, who develop an app to collect sleep habits from users worldwide. While they do not study the meaning of words, their insights are used for validation.
We propose a new approach to study the semantics of part-of-day nouns by exploiting Twitter and the time-specific greetings (e.g. ‘good morning’) used in different cultures. By mining tweets with these greetings, we obtain a large, worldwide sample of their usage. Since many tweets come with time and geolocation metadata, we can know the local time and country at which each one was emitted. The main contribution of the paper is to show how it is possible to learn the semantics of these terms in a much more extensive way than previous work, at a global scale, with less effort and allowing statistical testing of differences in usage between terms, countries and languages.
Materials and methods
To ground the semantics of greetings we used 5 terms as seeds: ‘good morning’, ‘good afternoon’, ‘good evening’, ‘good night’ and ‘hello’ (a time-unspecific greeting used for comparison). We translated them to 53 languages and variants using Bing translator. We use italics to refer to greetings irrespective of the language. 172,802,620 tweets were collected from Sept. 2 to Dec. 7 2016.
For some languages (e.g. Spanish), there is no differentiation between ‘good evening’ and ‘good night’, and they both are translated to the same expression. For some others, some expressions cannot be considered equivalent, e.g. ‘good morning’ is translated to ‘bonjour’ in French, which is however commonly used as ‘hello’, or simply as ‘good day’.
Text preprocessing is not necessary: we rely on metadata, not on the tweet itself, and only the seed words are needed to categorize tweets within a part of day. To clean up the data, we removed retweets, as they last for hours, biasing the temporal analysis. Duplicate tweets were kept, as similar messages from different days and users (e.g. ‘good night!’) are needed for the task at hand. Tweets need to be associated with a timestamp and country-level geolocation. Tweets have a creation time, composed of a utc time and a utc offset that varies depending on the time zone. However, most tweets are not geolocated and we must rely on the data provided by the user. This may be fake or incomplete, e.g. specifying only a village. We used fine-grained databases to do the mapping to the country level location and performed a sanity check, comparing the Twitter offset to the valid set of offsets for that country, to reduce the amount of wrongly geolocated tweets. Comparing the solar and standard time could provide more insights, but this requires a fine-grained geolocation of the tweets. We obtained a dataset of 10,523,349 elements, available at https://github.com/aghie/peoples2018grounding: 4,503,077 good morning's, 599,586 good afternoon's, 214,231 good evening's, 880,003 good night's and 4,359,797 hello's.
Results and validation
Given a country, some of the tweets are written in foreign languages for reasons like tourism or immigration. This paper refers to tweets written in official or de facto languages, unless otherwise specified. Also, analyzing differences according to criteria such as gender or solar time can be relevant. As determining the impact of all those is a challenge on its own, we focus on the primary research question: can we learn the semantics of part-of-day nouns from simple analysis of tweets? To verify data quality, good morning tweets were revised: out of 1,000 random tweets from the usa, 97.9% were legitimate greetings and, among the rest, some somehow reflected that the user had just started the day (e.g. ‘Didn't get any good morning sms’). We did the same for Spain (98.1% legitimate), Brazil (97.8%) and India (99.6%).
Existing work and dated events are used to ratify the results presented below.
Worldwide average greeting times
Table TABREF7 shows the average greeting times for the countries from which we collected more data. Asian, African and American countries tend to begin the day earlier than Europe (with exceptions, e.g. Germany). The table reflects that countries in southern Europe (e.g. Spain, Portugal or Greece) start the day later than the northern ones (the Netherlands or uk). For some countries, e.g. France, this information is known to be biased, as good morning (‘bonjour’) is used all along the day. A validation at a fine-grained scale is unfeasible, but the results at the country level are in line with Figure 3 of walch2016global, e.g., they state that Japan, the usa or Germany have earlier wake up times than Spain, Brazil or Turkey.
The average greeting times for good afternoon reveal insights that may stem from cultural differences (e.g. lunch break time). Anglo-Saxon and South Asian countries have the earliest afternoon (with averages between 13:00 and 14:00), while in Mediterranean countries the morning lasts longer (average greeting times for good afternoon around 15:00 or 16:00). A number of countries under the influence of the United Kingdom, such as the United States, Pakistan or India show earlier afternoon times. The opposite happens in South America, historically influenced by Portuguese and Spanish colonialism during the Early modern period, which exhibits later afternoon times.
This poses interesting questions for future work, such as whether there is a particular reason that could justify this behavior, like having more similar cuisine practices. In this context, the adoption of food practices in colonialism has already been studied by anthropologists and historians BIBREF4 . trigg2004food points out how, in the early period of Spanish colonialism in the Americas, the colonizers `civilized' the Indigenous community by making them adopt manners, dress and customs. She points out that the role of food was especially relevant due to its large social component, and was not limited to the way the food was eaten, but also to how it was prepared, served and consumed.
Twitter also reflects differences between countries regarding night life. On the one hand, Anglo-Saxon countries wish good night earlier (from 19:49 in the uk to 21:10 in Canada) than other societies. On the other hand, southern European countries go to bed later, and some of them even wish a good night after midnight (e.g. Spain). Comparing to BIBREF5 , we find similar tendencies. For example, in their study Spain, Turkey or Brazil use the smartphone until later than Canada, the usa or the uk, and therefore they go later to bed. Our Twitter approach also captures the particular case of Japanese mentioned by BIBREF5 : they wake up very early, but use the smartphone until late in the night, suggesting a later bed time.
A fine-grained analysis shows how Twitter captures other cultural and working differences. Figure FIGREF8 charts the average day time for good morning for the usa, Brazil, Spain and India during part of the polling period. The time peaks in the weekends for many of the countries, showing that Twitter captures how business and work are reduced during holidays, resulting in later wake up times.
However, this is not visible in some countries where working conditions are sometimes questioned BIBREF6 : for India the weekend peak is less pronounced, which can be considered as an indicator that a significant part of its population does not enjoy work-free weekends.
The usage of part-of-day expressions can be helpful to understand more complex issues, such as how foreigners integrate into a country and adapt to its daily schedule. We take the usa as an example, as it has a large foreign community of Spanish speakers, mainly from Mexico (and in a smaller proportion from other Latin American countries). If we calculate the average day time for the Spanish form of ‘good morning’ (‘buenos días’) in the usa, we obtain 08:09, while the corresponding English greeting's average time is 08:33. This is reinforced by Figure FIGREF10 , where the ‘buenos días’ average day time is consistently lower than that of ‘good morning’. This would be in line with their presence in low-wage jobs that require waking up earlier, e.g. waiting, cleaning or construction work BIBREF7 , BIBREF8 .
It is worth noting that, assuming that these ‘buenos días’ greetings come from latinos, those in the usa wake up even earlier than in their countries of origin (see Table TABREF7 ).
Figure FIGREF8 also shows how national holidays influence societies. For example, Nov. 2 (Day of the Dead) and Nov. 15 (Proclamation of the Republic) are holidays in Brazil, producing a peak in that country's graph similar to the behavior on the weekends. Similarly, Nov. 1 (All Saints' Day) and Dec. 6 (Constitution Day) are holidays in Spain, and similar peaks are observed too. From Figure FIGREF10 we can see how Thanksgiving (Nov. 24 in 2016) reflects a four-day weekend in the usa: many businesses allow employees to take this holiday from Thursday, resulting in a gradual and increasing peak that spans until Sunday. This is captured by the English good mornings, but not by the Spanish ones. The day after the usa 2016 elections (Nov. 9), a valley occurs in the good morning time for the States (Figure FIGREF8 ). The winner was not known until 03:00, suggesting that the distribution of greetings reflects social behaviors in other special events.
Daily analysis
Twitter can be used to do a time-of-day analysis, e.g., as said in § SECREF6 , ‘bonjour’ is assumed to be used all along the day. To test this, we take Canada, where French and English are official languages. Figure FIGREF12 shows how ‘bonjour’ and ‘salut’ (‘hello’) are used all along the day, while ‘good morning’ is used in the morning hours. English and French hello's share a similar distribution.
Figure FIGREF13 shows a greeting area chart for the usa, showing how ‘good evening’ and ‘good afternoon’ are well differentiated, with the transition happening over 16:30. This contrasts to countries such as Spain (Figure FIGREF14 ), where the language has a single word (‘tarde’) for ‘evening’ and ‘afternoon’, whose greeting spans from over 14:00, as the morning ends late (see § SECREF1 ), to 21:00.
Area plots like these give a clear picture of the semantics of part-of-day nouns, as they depict the exact times when they are used. The precise semantics can be grounded more rigorously using statistical testing to know the exact time intervals at which people significantly use a specific greeting.
For example, to know when to switch from good morning to good afternoon in Spanish, we can: (1) group the number of ‘buenos días’ (‘good morning’) and ‘buenas tardes’ (‘good afternoon’) by intervals of 10 minutes, and (2) apply a binomial test to each interval, to determine if one of the greetings is significantly more likely to occur than the other (assuming equal probability of occurrence). For example, for Spain, we obtain that the morning ends at 14:00 (p-value= INLINEFORM0 at 14:00, 0.09 at 14:10) and the afternoon starts at 14:40 (p-value becomes statistically significant again with INLINEFORM1 , showing a significant majority of good afternoon).
Conclusion
We crawled Twitter to study the semantics of part-of-day nouns in different countries and societies, showed examples from the polled period and ratified them against existing research and dated events. For space reasons we cannot show insights for all scenarios, but full results are at https://github.com/aghie/peoples2018grounding.
Acknowledgments
DV and CGR receive funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). | Unanswerable |
d43e868cae91b3dc393c05c55da0754b0fb3a46a | d43e868cae91b3dc393c05c55da0754b0fb3a46a_0 | Q: Which countries did they look at?
Text: Introduction
Human languages are intertwined with their cultures and societies, having evolved together, reflecting them and in turn shaping them BIBREF0 , BIBREF1 . Part-of-day nouns (e.g. ‘morning’ or ‘night’) are an example of this, as their meaning depends on how each language's speakers organize their daily schedule. For example, while the morning in English-speaking countries is assumed to end at noon, the Spanish term (‘mañana’) is understood to span until lunch time, which normally takes place between 13:00 and 15:00 in Spain. It is fair to relate this difference to cultural (lunch being the main meal of the day in Spain, as opposed to countries like the uk, and therefore being a milestone in the daily timetable) and sociopolitical factors (the late lunch time being influenced by work schedules and the displacement of the Spanish time zones with respect to solar time). Similar differences have been noted for different pairs of languages BIBREF2 and for cultures using the same language BIBREF3 , based on manual study, field research and interviews with natives. Work on automatically extracting the semantics of part-of-day nouns is scarce, as classic corpora are not timestamped. Reiter2003a,Reiter2003b overcome it by analyzing weather forecasts and aligning them to timestamped simulations, giving approximate groundings for time-of-day nouns and showing idiolectal variation on the term ‘evening’, but the work is limited to English.
The relation between language and sociocultural factors implies that the semantics of part-of-day nouns (e.g. 'end of the morning') cannot be studied in isolation from social habits (e.g. 'typical lunch time'). A relevant study of such habits is done by walch2016global, who develop an app to collect sleep habits from users worldwide. While they do not study the meaning of words, their insights are used for validation.
We propose a new approach to study the semantics of part-of-day nouns by exploiting Twitter and the time-specific greetings (e.g. ‘good morning’) used in different cultures. By mining tweets with these greetings, we obtain a large, worldwide sample of their usage. Since many tweets come with time and geolocation metadata, we can know the local time and country at which each one was emitted. The main contribution of the paper is to show how it is possible to learn the semantics of these terms in a much more extensive way than previous work, at a global scale, with less effort and allowing statistical testing of differences in usage between terms, countries and languages.
Materials and methods
To ground the semantics of greetings we used 5 terms as seeds: ‘good morning’, ‘good afternoon’, ‘good evening’, ‘good night’ and ‘hello’ (a time-unspecific greeting used for comparison). We translated them to 53 languages and variants using Bing translator. We use italics to refer to greetings irrespective of the language. 172,802,620 tweets were collected from Sept. 2 to Dec. 7 2016.
For some languages (e.g. Spanish), there is no differentiation between ‘good evening’ and ‘good night’, and they both are translated to the same expression. For some others, some expressions cannot be considered equivalent, e.g. ‘good morning’ is translated to ‘bonjour’ in French, which is however commonly used as ‘hello’, or simply as ‘good day’.
Text preprocessing is not necessary: we rely on metadata, not on the tweet itself, and only the seed words are needed to categorize tweets within a part of day. To clean up the data, we removed retweets, as they last for hours, biasing the temporal analysis. Duplicate tweets were kept, as similar messages from different days and users (e.g. ‘good night!’) are needed for the task at hand. Tweets need to be associated with a timestamp and country-level geolocation. Tweets have a creation time, composed of a utc time and a utc offset that varies depending on the time zone. However, most tweets are not geolocated and we must rely on the data provided by the user. This may be fake or incomplete, e.g. specifying only a village. We used fine-grained databases to do the mapping to the country level location and performed a sanity check, comparing the Twitter offset to the valid set of offsets for that country, to reduce the amount of wrongly geolocated tweets. Comparing the solar and standard time could provide more insights, but this requires a fine-grained geolocation of the tweets. We obtained a dataset of 10,523,349 elements, available at https://github.com/aghie/peoples2018grounding: 4,503,077 good morning's, 599,586 good afternoon's, 214,231 good evening's, 880,003 good night's and 4,359,797 hello's.
Results and validation
Given a country, some of the tweets are written in foreign languages for reasons like tourism or immigration. This paper refers to tweets written in official or de facto languages, unless otherwise specified. Also, analyzing differences according to criteria such as gender or solar time can be relevant. As determining the impact of all those is a challenge on its own, we focus on the primary research question: can we learn the semantics of part-of-day nouns from simple analysis of tweets? To verify data quality, good morning tweets were revised: out of 1,000 random tweets from the usa, 97.9% were legitimate greetings and, among the rest, some somehow reflected that the user had just started the day (e.g. ‘Didn't get any good morning sms’). We did the same for Spain (98.1% legitimate), Brazil (97.8%) and India (99.6%).
Existing work and dated events are used to ratify the results presented below.
Worldwide average greeting times
Table TABREF7 shows the average greeting times for the countries from which we collected more data. Asian, African and American countries tend to begin the day earlier than Europe (with exceptions, e.g. Germany). The table reflects that countries in southern Europe (e.g. Spain, Portugal or Greece) start the day later than the northern ones (the Netherlands or uk). For some countries, e.g. France, this information is known to be biased, as good morning (‘bonjour’) is used all along the day. A validation at a fine-grained scale is unfeasible, but the results at the country level are in line with Figure 3 of walch2016global, e.g., they state that Japan, the usa or Germany have earlier wake up times than Spain, Brazil or Turkey.
The average greeting times for good afternoon reveal insights that may stem from cultural differences (e.g. lunch break time). Anglo-Saxon and South Asian countries have the earliest afternoon (with averages between 13:00 and 14:00), while in Mediterranean countries the morning lasts longer (average greeting times for good afternoon around 15:00 or 16:00). A number of countries under the influence of the United Kingdom, such as the United States, Pakistan or India show earlier afternoon times. The opposite happens in South America, historically influenced by Portuguese and Spanish colonialism during the Early modern period, which exhibits later afternoon times.
This poses interesting questions for future work, such as whether there is a particular reason that could justify this behavior, like having more similar cuisine practices. In this context, the adoption of food practices in colonialism has already been studied by anthropologists and historians BIBREF4 . trigg2004food points out how, in the early period of Spanish colonialism in the Americas, the colonizers `civilized' the Indigenous community by making them adopt manners, dress and customs. She points out that the role of food was especially relevant due to its large social component, and was not limited to the way the food was eaten, but also to how it was prepared, served and consumed.
Twitter also reflects differences between countries regarding nightlife. On the one hand, Anglo-Saxon countries wish good night earlier (from 19:49 in the UK to 21:10 in Canada) than other societies. On the other hand, southern European countries go to bed later, and some of them even wish a good night after midnight (e.g. Spain). Compared with BIBREF5 , we find similar tendencies. For example, in their study Spain, Turkey or Brazil use the smartphone until later than Canada, the USA or the UK, and therefore they go to bed later. Our Twitter approach also captures the particular case of Japan mentioned by BIBREF5 : the Japanese wake up very early, but use the smartphone until late in the night, suggesting a later bedtime.
A fine-grained analysis shows how Twitter captures other cultural and working differences. Figure FIGREF8 charts the average day time for good morning for the USA, Brazil, Spain and India during part of the polling period. The time peaks on weekends for many of the countries, showing that Twitter captures how business and work are reduced during holidays, resulting in later wake-up times.
However, this is not visible in some countries where working conditions are sometimes questioned BIBREF6 : for India the weekend peak is less pronounced, which can be considered as an indicator that a significant part of its population does not enjoy work-free weekends.
The usage of part-of-day expressions can be helpful to understand more complex issues, such as how foreigners integrate into a country and adapt to its daily schedule. We take the USA as an example, as it has a large foreign community of Spanish speakers, mainly from Mexico (and in a smaller proportion from other Latin American countries). If we calculate the average day time for the Spanish form of ‘good morning’ (‘buenos días’) in the USA, the result is 08:09, while the corresponding English greeting's average time is 08:33. This is reinforced by Figure FIGREF10 , where the ‘buenos días’ average day time is consistently earlier than that of ‘good morning’. This would be in line with their presence in low-wage jobs that require waking up earlier, e.g. waiting, cleaning or construction work BIBREF7 , BIBREF8 .
It is worth noting that, assuming these ‘buenos días’ greetings come from Latinos, those in the USA wake up even earlier than those in their countries of origin (see Table TABREF7 ).
Figure FIGREF8 also shows how national holidays influence societies. For example, Nov. 2 (Day of the Dead) and Nov. 15 (Proclamation of the Republic) are holidays in Brazil, producing a peak in that country's graph similar to the behavior on weekends. Similarly, Nov. 1 (All Saints' Day) and Dec. 6 (Constitution Day) are holidays in Spain and similar peaks are observed too. From Figure FIGREF10 we can see how Thanksgiving (Nov. 24 in 2016) reflects a four-day weekend in the USA: many businesses allow employees to take this holiday from Thursday, resulting in a gradual, increasing peak that spans until Sunday. This is captured by the English good mornings, but not by the Spanish ones. The day after the USA 2016 elections (Nov. 9), a valley occurs in the good morning time for the States (Figure FIGREF8 ). The winner was not known until 03:00, suggesting that the distribution of greetings reflects social behaviors during other special events.
Daily analysis
Twitter can be used to do a time-of-day analysis, e.g., as noted in § SECREF6 , ‘bonjour’ is assumed to be used throughout the day. To test this, we take Canada, where French and English are official languages. Figure FIGREF12 shows how ‘bonjour’ and ‘salut’ (‘hello’) are used throughout the day, while ‘good morning’ is used in the morning hours. English and French hello's share a similar distribution.
Figure FIGREF13 shows a greeting area chart for the USA, showing how ‘good evening’ and ‘good afternoon’ are well differentiated, with the transition happening around 16:30. This contrasts with countries such as Spain (Figure FIGREF14 ), where the language has a single word (‘tarde’) for ‘evening’ and ‘afternoon’, whose greeting spans from around 14:00, as the morning ends late (see § SECREF1 ), to 21:00.
Area plots like these give a clear picture of the semantics of part-of-day nouns, as they depict the exact times when they are used. The precise semantics can be grounded more rigorously using statistical testing to know the exact time intervals at which people significantly use a specific greeting.
For example, to know when to switch from good morning to good afternoon in Spanish, we can: (1) group the number of ‘buenos días’ (‘good morning’) and ‘buenas tardes’ (‘good afternoon’) by intervals of 10 minutes, and (2) apply a binomial test to each interval, to determine if one of the greetings is significantly more likely to occur than the other (assuming equal probability of occurrence). For example, for Spain, we obtain that the morning ends at 14:00 (p-value= INLINEFORM0 at 14:00, 0.09 at 14:10) and the afternoon starts at 14:40 (p-value becomes statistically significant again with INLINEFORM1 , showing a significant majority of good afternoon).
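A minimal sketch of this interval-wise test, assuming the tweets have already been mapped to local minutes since midnight and using SciPy's two-sided binomial test; the data structures are illustrative.

```python
# Group greetings into 10-minute intervals and test, per interval, whether one
# greeting is significantly more frequent than the other under a 50/50 null.
from collections import Counter
from scipy.stats import binomtest

def significant_greeting_per_interval(tweets, alpha=0.05):
    """tweets: iterable of (local_minutes_since_midnight, greeting), where the
    greeting is either 'buenos dias' or 'buenas tardes'."""
    counts = {}
    for minute, greeting in tweets:
        counts.setdefault(minute // 10, Counter())[greeting] += 1
    result = {}
    for interval, c in sorted(counts.items()):
        k = c['buenos dias']
        n = k + c['buenas tardes']
        if n == 0:
            continue
        p_value = binomtest(k, n, p=0.5).pvalue
        if p_value < alpha:
            result[interval * 10] = 'buenos dias' if k > n - k else 'buenas tardes'
        else:
            result[interval * 10] = None  # no significant majority in this interval
    return result
```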
Conclusion
We crawled Twitter to study the semantics of part-of-day nouns in different countries and societies, showed examples from the polling period and ratified them against existing research and dated events. For space reasons we cannot show insights for all scenarios, but full results are at https://github.com/aghie/peoples2018grounding.
Acknowledgments
DV and CGR receive funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). | Unanswerable |
fd8b6723ad5f52770bec9009e45f860f4a8c4321 | fd8b6723ad5f52770bec9009e45f860f4a8c4321_0 | Q: What QA models were used?
Text: Introduction and Background
Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge Base Population BIBREF0 , Knowledge Graph Acquisition BIBREF1 , and Natural Language Understanding. However, existing IE systems still cannot produce high-quality relation tuples to effectively support downstream applications.
Previous IE Systems
Most previous IE systems can be divided into Relation Extraction (RE) based systems BIBREF2 , BIBREF3 and Open IE systems BIBREF4 , BIBREF5 , BIBREF6 .
Early work on RE decomposes the problem into Named Entity Recognition (NER) and relation classification. With the recent development of neural networks (NN), NN based NER models BIBREF7 , BIBREF8 and relation classification models BIBREF9 show better performance than previous handcrafted feature based methods. The recently proposed RE systems BIBREF10 , BIBREF11 try to jointly perform entity recognition and relation extraction to improve the performance. One limitation of existing RE benchmarks, e.g., NYT BIBREF12 , Wiki-KBP BIBREF13 and BioInfer BIBREF14 , is that they only involve 24, 19 and 94 relation types respectively, compared with the thousands of relation types in knowledge bases such as DBpedia BIBREF15 , BIBREF16 . Besides, existing RE systems can only extract relation tuples from a single sentence while cross-sentence information is ignored. Therefore, existing RE based systems are not powerful enough to support downstream applications in terms of performance or scalability.
On the other hand, early work on Open IE is mainly based on bootstrapping and pattern learning methods BIBREF17 . Recent work incorporates lexical features and sentence parsing results to automatically build a large number of pattern templates, based on which the systems can extract relation tuples from an input sentence BIBREF4 , BIBREF5 , BIBREF6 . An obvious weakness is that the extracted relations are formed by free texts which means they may be polysemous or synonymous and thus cannot be directly used without disambiguation and aggregation. The extracted free-text relations also bring extra manual evaluation cost, and how to automatically evaluate different Open IE systems fairly is an open problem. Stanovsky and Dagan BIBREF18 try to solve this problem by creating an Open IE benchmark with the help of QA-SRL annotations BIBREF19 . Nevertheless, the benchmark only involves 10K golden relation tuples. Hence, Open IE in its current form cannot provide a satisfactory solution to high-quality IE that supports downstream applications.
There are some recently proposed IE approaches which try to incorporate Question Answering (QA) techniques into IE. Levy et al. BIBREF20 propose to reduce the RE problem to answering simple reading comprehension questions. They build a question template for each relation type, and by asking questions with a relevant sentence and the first entity given, they can obtain relation triples from the sentence corresponding to the relation type and the first entity. Roth et al. BIBREF21 further improve the model performance on a similar problem setting. However, these approaches focus on sentence level relation argument extractions and do not provide a full-stack solution to general IE. In particular, they do not provide a solution to extract the first entity and its corresponding relation types before applying QA. Besides, sentence level relation extraction ignores the information across sentences such as coreference and inference between sentences, which greatly reduces the information extracted from the documents.
QA4IE Framework
To overcome the above weaknesses of existing IE systems, we propose a novel IE framework named QA4IE to perform document level general IE with the help of state-of-the-art approaches in Question Answering (QA) and Machine Reading Comprehension (MRC) area.
The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \lbrace e_i, r_{ij}, e_j\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:
Step 1. Recognize all the candidate entities in the input document $D$ according to the knowledge base $K$ . These entities serve as the first entity $e_i$ in the relation triples $R$ .
Step 2. For each candidate entity $e_i$ , discover the potential relations/properties as $r_{ij}$ from the knowledge base $K$ .
Step 3. Given a candidate entity-relation or entity-property pair $\lbrace e_i, r_{ij}\rbrace $ as a query, find the corresponding entity or value $e_j$ in the input document $D$ using a QA system. The query here can be directly formed by the word sequence of $\lbrace e_i, r_{ij}\rbrace $ , or built from templates as in BIBREF20 .
Step 4. Since the results of step 3 are formed by free texts in the input document $D$ , we need to link the results to the knowledge base $K$ .
This framework determines each of the three elements in relation triples step by step. Step 1 is equivalent to named entity recognition (NER), and state-of-the-art NER systems BIBREF22 , BIBREF8 can achieve over 0.91 F1-score on CoNLL'03 BIBREF23 , a well-known NER benchmark. For attribution discovery in step 2, we can take advantage of existing knowledge base ontologies such as Wikipedia Ontology to obtain a candidate relation/property list according to NER results in step 1. Besides, there is also some existing work on attribution discovery BIBREF24 , BIBREF25 and ontology construction BIBREF26 that can be used to solve the problem in step 2. The most difficult part in our framework is step 3 in which we need to find the entity (or value) $e_j$ in document $D$ according to the previous entity-relation (or entity-property) pair $\lbrace e_i, r_{ij}\rbrace $ . Inspired by recent success in QA and MRC BIBREF27 , BIBREF28 , BIBREF29 , we propose to solve step 3 in the setting of SQuAD BIBREF30 which is a very popular QA task. The problem setting of SQuAD is that given a document $\tilde{D}$ and a question $q$ , output a segment of text $a$ in $\tilde{D}$ as the answer to the question. In our framework, we assign the input document $D$ as $\tilde{D}$ and the entity-relation (or entity-property) pair $\lbrace e_i, r_{ij}\rbrace $ as $q$ , and then we can get the answer $a$ with a QA model. Finally in step 4, since the QA model can only produce answers formed by input free texts, we need to link the answer $a$ to an entity $e_j$ in the knowledge base $K$ , and the entity $e_j$ will form the target relation triple as $\lbrace e_i, r_{ij}, e_j\rbrace $ . Existing Entity Linking (EL) systems BIBREF31 , BIBREF32 directly solve this problem especially when we have high quality QA results from step 3.
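Putting the four steps together, the framework can be sketched as the loop below; every helper function stands in for the existing NER, attribution-discovery, QA and entity-linking systems discussed above and is a placeholder, not an implementation of them.

```python
# Illustrative end-to-end sketch of the QA4IE pipeline (all helpers are assumed).
def qa4ie(document, knowledge_base, ner, candidate_relations, qa_model, entity_linker):
    triples = []
    for e_i in ner(document, knowledge_base):                  # step 1: candidate first entities
        for r_ij in candidate_relations(e_i, knowledge_base):  # step 2: relations/properties from the KB ontology
            query = f"{e_i} {r_ij}"                            # step 3: the pair itself serves as the query
            answer = qa_model(document, query)                 # word sequence from the document, or None
            if answer is None:
                continue
            e_j = entity_linker(answer, knowledge_base)        # step 4: link free text back to a KB entity
            if e_j is not None:
                triples.append((e_i, r_ij, e_j))
    return triples
```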
As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.
Recent success on QA and MRC is mainly attributed to advanced deep learning architectures such as attention-based and memory-augmented neural networks BIBREF35 , BIBREF36 and the availability of large-scale datasets BIBREF37 , BIBREF38 especially SQuAD. The differences between step 3 and SQuAD can be summarized as follows. First, the answer to the question in SQuAD is restricted to a continuous segment of the input text, but in QA4IE, we remove this constraint which may reduce the number of target relation triples. Second, in existing QA and MRC benchmarks, the input documents are not very long and the questions may be complex and difficult to understand by the model, while in QA4IE, the input documents may be longer but the questions formed by entity-relation (or entity-property) pair are much simpler. Therefore, in our model, we incorporate Pointer Networks BIBREF39 to adapt to the answers formed by any words within the document in any order as well as Self-Matching Networks BIBREF29 to enhance the ability on modeling longer input documents.
Contributions
The contributions of this paper are as follows:
We propose a novel IE framework named QA4IE to overcome the weaknesses of existing IE systems. As we discussed above, the problem of step 1, 2 and 4 can be solved by existing work and we propose to solve the problem of step 3 with QA models.
To train a high quality neural network QA model, we build a large IE benchmark in QA style named QA4IE benchmark which consists of 293K Wikipedia articles and 2 million golden relation triples with 636 different relation types.
To adapt QA models to the IE problem, we propose an approach that enhances existing QA models with Pointer Networks and Self-Matching Networks.
We compare our model with IE baselines on our QA4IE benchmark and achieve a great improvement over previous baselines.
We open source our code and benchmark for repeatable experiments and further study of IE.
QA4IE Benchmark Construction
This section briefly presents the construction pipeline of the QA4IE benchmark to solve the problem of step 3 as in our framework (Figure 1 ). The largest existing IE benchmark BIBREF18 is created with the help of QA-SRL annotations BIBREF19 and consists of 3.2K sentences and 10K golden extractions. Following this idea, we study recent large-scale QA and MRC datasets and find that WikiReading BIBREF33 creates a large-scale QA dataset based on Wikipedia articles and WikiData relation triples BIBREF34 . However, we observe about 11% of QA pairs with errors such as wrong answer locations or a mismatch between the answer string and the answer words. Besides, over 50% of the QA pairs have answers involving words out of the input text or containing multiple answers. We consider these cases out of the problem scope of this paper and only focus on the information within the input text.
Therefore, we choose to build the benchmark following the implementation of WikiReading, based on Wikipedia articles and golden triples from Wikidata and DBpedia BIBREF15 , BIBREF16 . Specifically, we build our QA4IE benchmark in the following steps.
Dump and Preprocessing. We dump the English Wikipedia articles with Wikidata knowledge base and match each article with its corresponding relation triples according to its title. After cleaning data by removing low frequency tokens and special characters, we obtain over 4M articles and 18M triples with over 800 relation types.
Clipping. We discard the triples with multiple entities (or values) for $e_j$ (accounting for about 6%, e.g., a book may have multiple authors). Besides, we discard the triples with any word in $e_j$ missing from the corresponding article (accounting for about 50%). After this step, we obtain about 3.5M articles and 9M triples with 636 relation types.
Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. Out of 2064 DBpedia relations, we manually find 148 that can be projected to a WikiData relation. Then we gather all the DBpedia triples whose first entity corresponds to one of the above 3.5M articles and whose relation is one of the projected 148 relations. After the same clipping process as above and removing repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles.
Distillation. Since our benchmark is for IE, we prefer articles with more golden triples involved, assuming that Wikipedia articles with more annotated triples are more informative and better annotated. Therefore, we examine the distribution of the number of golden triples in articles and decide to discard the articles with fewer than 6 golden triples (accounting for about 80%). After this step, we obtain about 200K articles and 1.4M triples with 636 relation types.
Query and Answer Assignment. For each golden triple $\lbrace e_i, r_{ij}, e_j\rbrace $ , we assign the relation/property $r_{ij}$ as the query and the entity $e_j$ as the answer, because the Wikipedia article and its corresponding golden triples are all about the same entity $e_i$ , which is therefore unnecessary in the queries. Besides, we find the location of each $e_j$ in the corresponding article as the answer location. As we discussed in Section 1, we do not restrict $e_j$ to a continuous segment in the article as required in SQuAD. Thus we first try to detect a matched span for each $e_j$ and assign this span as the answer location. Then, for each remaining $e_j$ with no matched span, we search for a matched sub-sequence in the article and assign the index sequence as the answer location. We name them span-triples and seq-triples respectively. Note that each triple has an answer location because we have discarded the triples with unseen words in $e_j$ ; if we find multiple answer locations, all of them are assigned as ground truths.
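A sketch of this assignment under the assumption of tokenized articles and answers: a contiguous span match is attempted first, and a greedy left-to-right sub-sequence match is used as the fallback (the text does not specify which of several possible sub-sequence matches is chosen).

```python
# Assign an answer location as a list of token indices: a span-triple if the
# answer occurs contiguously, otherwise a seq-triple via greedy in-order matching.
def find_answer_location(article_tokens, answer_tokens):
    n, m = len(article_tokens), len(answer_tokens)
    # Span match: the answer appears as a contiguous segment.
    for start in range(n - m + 1):
        if article_tokens[start:start + m] == answer_tokens:
            return list(range(start, start + m))   # span-triple
    # Sub-sequence match: answer words appear in order, possibly with gaps.
    indices, pos = [], 0
    for target in answer_tokens:
        while pos < n and article_tokens[pos] != target:
            pos += 1
        if pos == n:
            return None                            # should not happen after clipping
        indices.append(pos)
        pos += 1
    return indices                                 # seq-triple
```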
Dataset Splitting. For comparing the performance on span-triples and seq-triples, we set up two different datasets named QA4IE-SPAN and QA4IE-SEQ. In QA4IE-SPAN, only articles with all span-triples are involved, while in QA4IE-SEQ, articles with seq-triples are also involved. For studying the influence of the article length as longer articles are normally more difficult to model by LSTMs, we split the articles according to the article length. We name the set of articles with lengths shorter than 400 as S, lengths between 400 and 700 as M, lengths greater than 700 as L. Therefore we obtain 6 different datasets named QA4IE-SPAN-S/M/L and QA4IE-SEQ-S/M/L. A 5/1/5 splitting of train/dev/test sets is performed. The detailed statistics of QA4IE benchmark are provided in Table 1 .
We further compare our QA4IE benchmark with some existing IE and QA benchmarks in Table 2 . One can observe that QA4IE benchmark is much larger than previous IE and QA benchmarks except for WikiReading and Zero-Shot Benchmark. However, as we mentioned at the beginning of Section 2, WikiReading is problematic for IE settings. Besides, Zero-Shot Benchmark is a sentence-level dataset and we have described the disadvantage of ignoring information across sentences at Section 1.1. Thus to our best knowledge, QA4IE benchmark is the largest document level IE benchmark and it can be easily extended if we change our distillation strategy.
Question Answering Model
In this section, we describe our Question Answering model for IE. The model overview is illustrated in Figure 2 .
The inputs of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[m]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_m$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as
$$\begin{split} g_t &= {\rm sigmoid}(W_gx_t+b_g) \\ s_t &= {\rm relu } (W_xx_t+b_x) \\ u_t &= g_t \odot s_t + (1 - g_t) \odot x_t~. \end{split}$$ (Eq. 18)
Here $W_g, W_x \in \mathbb {R}^{d \times 2d}$ and $b_g, b_x \in \mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .
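A minimal PyTorch sketch of the gated layer in Eq. ( 18 ); the authors implement their model in TensorFlow, and since the gate, the transform and the carried input must share a dimension, this sketch assumes the concatenated $2d$ -dimension embedding has already been projected to $d$ dimensions before the layer is applied.

```python
# Highway-style layer: u = g * relu(W_x x + b_x) + (1 - g) * x, g = sigmoid(W_g x + b_g)
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)       # W_g, b_g
        self.transform = nn.Linear(dim, dim)  # W_x, b_x

    def forward(self, x):                     # x: (batch, seq_len, dim)
        g = torch.sigmoid(self.gate(x))       # g_t
        s = torch.relu(self.transform(x))     # s_t
        return g * s + (1.0 - g) * x          # u_t
```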
Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:
$$\begin{split} u_t^{^{\prime }} &= {\rm BiLSTM}(u^{^{\prime }}_{t-1},u_t) \\ v_t^{^{\prime }} &= {\rm BiLSTM}(v^{^{\prime }}_{t-1},v_t)~. \end{split}$$ (Eq. 19)
Here we obtain $\mathbf {U} = [u_1^{^{\prime }}, ... , u_n^{^{\prime }}] \in \mathbb {R}^{2d \times n}$ and $\mathbf {V} = [v_1^{^{\prime }}, ... , v_m^{^{\prime }}] \in \mathbb {R}^{2d \times m}$ . Then we feed $\mathbf {U}$ and $\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.
After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as
$$\begin{split} o_t &= {\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\ s_j^t &= w^T {\rm tanh}(W_hh_j+\tilde{W_h}h_t)\\ \alpha _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\alpha _i^th_i ~. \end{split}$$ (Eq. 20)
Here $W_h, \tilde{W_h} \in \mathbb {R}^{d \times 8d}$ and $w \in \mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as
$$\begin{split} p_t &= {\rm LSTM}(p_{t-1}, c_t) \\ s_j^t &= w^T {\rm tanh}(W_oo_j+W_pp_{t-1})\\ \beta _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\beta _i^to_i~. \end{split}$$ (Eq. 21)
The initial state of LSTM $p_0$ is $o_n$ . We can then model the probability of the $t^{th}$ token $a^t$ by
$$& {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O}) = (\beta _1^t, \beta _2^t, ... , \beta _n^t, \beta _{n+1}^t) \nonumber \\ & {\rm P}(a^t_i) \triangleq {\rm P}(a^t = i|a^1, ... , a^{t-1}, \mathbf {O}) = \beta _i^t ~.$$ (Eq. 22)
Here $\beta _{n+1}^t$ denotes the probability of generating the “ ${\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\textbf {a}$ is as follows
$${\rm P}(\textbf {a}|\mathbf {O}) = \prod _t {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O})~.$$ (Eq. 23)
Given the supervision of answer sequence $\mathbf {y} = (y_1, ... , y_L)$ , we can write down the loss function of our model as
$${\rm L(\theta )} = -\sum _{t=1}^L \log {\rm P} (a^t_{y_t})~.$$ (Eq. 24)
To train our model, we minimize the loss function ${\rm L(\theta )}$ based on training examples.
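A rough PyTorch sketch of the decoder and loss in Eqs. ( 21 )-( 24 ) for a single document; how the “eos” position is represented is not spelled out in the text, so a learned eos vector appended to $\mathbf {O}$ is assumed here, and batching and teacher-forcing details are omitted.

```python
# Pointer-style decoder: at each step, score all document positions (plus an
# assumed eos slot) against the previous decoder state, accumulate the NLL of
# the gold index, and advance the LSTM with the attention-pooled context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerDecoder(nn.Module):
    def __init__(self, enc_dim, attn_dim):
        super().__init__()
        self.cell = nn.LSTMCell(enc_dim, enc_dim)             # p_t = LSTM(p_{t-1}, c_t)
        self.W_o = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_p = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w = nn.Linear(attn_dim, 1, bias=False)
        self.eos = nn.Parameter(torch.randn(enc_dim))         # assumed eos representation

    def forward(self, O, target):
        # O: (n, enc_dim) self-matched encodings; target: (L,) gold indices, index n meaning eos.
        O_ext = torch.cat([O, self.eos.unsqueeze(0)], dim=0)  # (n+1, enc_dim)
        h = O[-1].unsqueeze(0)                                # p_0 = o_n
        c = torch.zeros_like(h)
        nll = 0.0
        for t in range(target.size(0)):
            scores = self.w(torch.tanh(self.W_o(O_ext) + self.W_p(h))).squeeze(-1)  # s_j^t from p_{t-1}
            log_beta = F.log_softmax(scores, dim=0)           # (n+1,), includes the eos probability
            nll = nll - log_beta[target[t]]                   # summand of Eq. ( 24 )
            beta = log_beta.exp()
            ctx = beta[:-1].unsqueeze(0) @ O                  # c_t = sum_i beta_i^t o_i over document positions
            h, c = self.cell(ctx, (h, c))                     # advance to p_t
        return nll
```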
Experimental Setup
We build our QA4IE benchmark following the steps described in Section 2. In experiments, we train and evaluate our QA models on the corresponding train and test sets while the hyper-parameters are tuned on dev sets. In order to make our experiments more informative, we also evaluate our model on SQuAD dataset BIBREF30 .
The preprocessing of our QA4IE benchmark and the SQuAD dataset is performed with the open source code from BIBREF27 . We use 100 1D filters with width 5 to construct the CharCNN in our char embedding layer. We set the hidden size $d=100$ for all the hidden states in our model. The optimizer we use is the AdaDelta optimizer BIBREF45 with an initial learning rate of 2. A dropout BIBREF46 rate of 0.2 is applied in all the CNN, LSTM and linear transformation layers in our model during training. For the SQuAD dataset and our small sized QA4IE-SPAN/SEQ-S datasets, we set the max length of input texts as 400 and a mini-batch size of 20. For middle sized (and large sized) QA4IE datasets, we set the max length as 700 (800) and the batch size as 7 (5). We introduce early stopping in the training process after 10 epochs. Our model is trained on a GTX 1080 Ti GPU and it takes about 14 hours on the small sized QA4IE datasets. We implement our model with TensorFlow BIBREF47 and optimize the computationally expensive LSTM layers with LSTMBlockFusedCell.
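For reference, the hyperparameters above can be collected into a single configuration; the key names are ours, not those of the released code.

```python
# Hyperparameters from the experimental setup, gathered into one place.
CONFIG = {
    "char_cnn": {"num_filters": 100, "kernel_width": 5},
    "hidden_size": 100,                  # d, for all hidden states
    "optimizer": "AdaDelta",
    "initial_learning_rate": 2.0,
    "dropout": 0.2,                      # CNN, LSTM and linear layers
    "max_length": {"SQuAD": 400, "S": 400, "M": 700, "L": 800},
    "batch_size": {"SQuAD": 20, "S": 20, "M": 7, "L": 5},
    "early_stopping_epochs": 10,
}
```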
Results in QA Settings
We first perform experiments in QA settings to evaluate our QA model on both SQuAD dataset and QA4IE benchmark. Since our goal is to solve IE, not QA, the motivation of this part of experiments is to evaluate the performance of our model and make a comparison between QA4IE benchmark and existing datasets. Two metrics are introduced in the SQuAD dataset: Exact Match (EM) and F1-score. EM measures the percentage that the model prediction matches one of the ground truth answers exactly while F1-score measures the overlap between the prediction and ground truth answers. Our QA4IE benchmark also adopts these two metrics.
Table 3 presents the results of our QA model on SQuAD dataset. Our model outperforms the previous sequence model but is not competitive with span models because it is designed to produce sequence answers in IE settings while baseline span models are designed to produce span answers for SQuAD dataset.
The comparison between our QA model and two baseline QA models on our QA4IE benchmark is shown in Table 4 . For training of both baseline QA models, we use the same configuration of max input length as our model and tune the rest of hyper-parameters on dev sets. Our model outperforms these two baselines on all 6 datasets. The performance is good on S and M datasets but worse for longer documents. As we mentioned in Section 4.1, we set the max input length as 800 and ignore the rest words on L datasets. Actually, there are 11% of queries with no answers in the first 800 words in our benchmark. Processing longer documents is a tough problem BIBREF51 and we leave this to our future work.
To study the improvement of each component in our model, we present model ablation study results in Table 5 . We do not ablate the Attention Flow Layer or the Pointer Network Decoder, as they cannot be replaced by other architectures while keeping the model functional. We can observe that the first three components effectively improve the performance, but the Self-Matching Layer makes training about 40% more computationally expensive. Besides, the LSTMBlockFusedCell works effectively and accelerates the training process by a factor of 6 without influencing the performance.
Results in IE Settings
In this subsection, we put our QA model in the entire pipeline of our QA4IE framework (Figure 1 ) and evaluate the framework in IE settings. Existing IE systems are all free-text based Open IE systems, so we need to manually evaluate the free-text based results in order to compare our model with the baselines. Therefore, we conduct experiments on a small dataset, the dev set of QA4IE-SPAN-S which consists of 4393 documents and 28501 ground truth queries.
Our QA4IE benchmark is based on Wikipedia articles and all the ground truth triples of each article have the same first entity (i.e. the title of the article). Thus, we can directly use the title of the article as the first entity of each triple without performing step 1 (entity recognition) in our framework. Besides, all the ground truth triples in our benchmark are from knowledge base where they are disambiguated and aggregated in the first place, and therefore step 4 (entity linking) is very simple and we do not evaluate it in our experiments.
A major difference between QA settings and IE settings is that in QA settings, each query corresponds to an answer, while in the QA4IE framework, the QA model takes a candidate entity-relation (or entity-property) pair as the query and needs to tell whether an answer to the query can be found in the input text. We can consider the IE settings here as performing step 2 and then step 3 in the QA4IE framework.
In step 2, we need to build a candidate query list for each article in the dataset. Instead of incorporating existing ontology or knowledge base, we use a simple but effective way to build the candidate query list of an article. Since we have a ground truth query list with labeled answers of each article, we can add all the neighboring queries of each ground truth query into the query list. The neighboring queries are defined as two queries that co-occur in the same ground truth query list of any articles in the dataset. We transform the dev set of QA4IE-SPAN-S above by adding neighboring queries into the query list. After this step, the number of queries grows to 426336, and only 28501 of them are ground truth queries labeled with an answer.
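A sketch of this expansion step; the data structures are assumed, and the co-occurrence definition of neighboring queries follows the description above.

```python
# Two queries are "neighbors" if they co-occur in the ground-truth query list of
# any article; each article's candidate list is its gold queries plus all neighbors.
from collections import defaultdict

def build_candidate_queries(gold_queries_per_article):
    """gold_queries_per_article: dict mapping article_id -> set of gold queries."""
    neighbors = defaultdict(set)
    for queries in gold_queries_per_article.values():
        for q in queries:
            neighbors[q] |= queries - {q}
    candidates = {}
    for article_id, queries in gold_queries_per_article.items():
        expanded = set(queries)
        for q in queries:
            expanded |= neighbors[q]
        candidates[article_id] = expanded
    return candidates
```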
In step 3, we require our QA model to output a confidence score along with the answer to each candidate query. Our QA model produces no answer to a query when the confidence score is less than a threshold $\delta $ or the output is an “ ${\rm eos}$ ” symbol. For the answers with a confidence score $\ge \delta $ , we evaluate them by the EM measurement with ground truth answers and count the true positive samples in order to calculate the precision and recall under the threshold $\delta $ . Specifically, we try two confidence scores calculated as follows:
$$\begin{split} {\rm Score_{mul}} = \prod _{t=1}^L{\rm P}(a^t_{i_t}),~~~{\rm Score_{avg}} = \sum _{t=1}^L{\rm P}(a^t_{i_t}) / L ~, \end{split}$$ (Eq. 34)
where $(a^1_{i_1}, ... , a^L_{i_L})$ is the answer sequence and ${\rm P}(a^t_i)$ is defined in Eq. ( 22 ). ${\rm Score_{mul}}$ is equivalent to the training loss in Eq. ( 24 ) and ${\rm Score_{avg}}$ takes the answer length into account.
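A small sketch of how the two scores and the threshold $\delta $ can be applied to decoded answers; the per-step probabilities are assumed to come from Eq. ( 22 ).

```python
# Compute Score_mul and Score_avg from the chosen-token probabilities of a decoded
# answer, and decide whether to keep the prediction for a given threshold delta.
import math

def confidence_scores(step_probs):
    """step_probs: list of P(a^t = chosen index), one per decoding step."""
    score_mul = math.exp(sum(math.log(p) for p in step_probs))   # product of probabilities
    score_avg = sum(step_probs) / len(step_probs)                # length-normalized alternative
    return score_mul, score_avg

def keep_prediction(answer_tokens, step_probs, delta, use_avg=True):
    if not answer_tokens or answer_tokens == ["eos"]:
        return False
    score_mul, score_avg = confidence_scores(step_probs)
    return (score_avg if use_avg else score_mul) >= delta
```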
The precision-recall curves of our framework based on the two confidence scores are plotted in Figure 3 . We can observe that the EM rate we achieve in QA settings is actually the best recall (91.87) in this curve (by setting $\delta = 0$ ). The best F1-scores of the two curves are 29.97 (precision $= 21.61$ , recall $= 48.85$ , $\delta = 0.91$ ) for ${\rm Score_{mul}}$ and 31.05 (precision $= 23.93$ , recall $= 44.21$ , $\delta = 0.97$ ) for ${\rm Score_{avg}}$ . ${\rm Score_{avg}}$ is better than ${\rm Score_{mul}}$ , which suggests that the answer length should be taken into account.
We then evaluate existing IE systems on the dev set of QA4IE-SPAN-S and empirically compare them with our framework. Note that while BIBREF20 is closely related to our work, we cannot fairly compare our framework with BIBREF20 because their systems are in the sentence level and require additional negative samples for training. BIBREF21 is also related to our work, but their dataset and code have not been published yet. Therefore, we choose to evaluate three popular Open IE systems, Open IE 4 BIBREF6 , Stanford IE BIBREF4 and ClauseIE BIBREF5 .
Since Open IE systems take a single sentence as input and output a set of free-text based triples, we need to find the sentences involving ground truth answers and feed the sentences into the Open IE systems. In the dev set of QA4IE-SPAN-S, there are 28501 queries with 44449 answer locations labeled in the 4393 documents. By feeding the 44449 sentences into the Open IE systems, we obtain a set of extracted triples from each sentence. We calculate the number of true positive samples by first filtering out triples with less than 20% words overlapping with ground truth answers and then asking two human annotators to verify the remaining triples independently. Since in the experiments, our framework is given the ground-truth first entity of each triple (the title of the corresponding Wikipedia article) while the baseline systems do not have this information, we ask our human annotators to ignore the mistakes on the first entities when evaluating triples produced by the baseline systems to offset this disadvantage. For example, the 3rd case of ClauseIE and the 4th case of Open IE 4 in Table 7 are all labeled as correct by our annotators even though the first entities are pronouns. The two human annotators reached an agreement on 191 out of 195 randomly selected cases.
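A sketch of the 20% overlap pre-filter applied before manual verification; the text does not specify the overlap denominator, so the extracted triple's word set is assumed here.

```python
# Keep an extracted triple for human verification only if at least 20% of its
# words overlap with some ground-truth answer for the sentence.
def overlaps_enough(triple_text, gold_answers, threshold=0.2):
    triple_words = set(triple_text.lower().split())
    if not triple_words:
        return False
    best = max(
        len(triple_words & set(ans.lower().split())) / len(triple_words)
        for ans in gold_answers
    )
    return best >= threshold
```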
The evaluation results of the three Open IE baselines are shown in Table 6 . We can observe that most of the extracted triples are not related to ground truths and the precision and recall are all very low (around 1%) although we have already helped the baseline systems locate the sentences containing ground truth answers.
Case Study
In this subsection, we perform case studies of IE settings in Table 7 to better understand the models and benchmarks. The baseline Open IE systems produce triples by analyzing the subjects, predicates and objects in input sentences, and thus our annotators lower the bar for accepting triples. However, the analysis of semantic roles and parse trees does not work well on complicated input sentences like the 2nd and the 3rd cases. Besides, the baseline systems can hardly solve the last two cases, which require inference over the input sentences.
Our framework works very well on this dataset, with the QA measurements EM $= 91.87$ and F1 $= 93.53$ , and the IE measurements can be found in Figure 3 . Most of the error cases resemble the fourth case, which human annotators find acceptable. Note that our framework takes the whole document as input while the baseline systems take individual sentences as input, which means the experimental setting is much more difficult for our framework.
Human Evaluation on QA4IE Benchmark
Finally, we perform a human evaluation on our QA4IE benchmark to verify the reliability of former experiments. The evaluation metrics are as follows:
Triple Accuracy is to check whether each ground truth triple is accurate (one cannot find conflicts between the ground truth triple and the corresponding article) because the ground truth triples from WikiData and DBpedia may be incorrect or incomplete.
Contextual Consistency is to check whether the context of each answer location is consistent with the corresponding ground truth triple (one can infer from the context to obtain the ground truth triple) because we keep all matched answer locations as ground truths but some of them may be irrelevant with the corresponding triple.
Triple Consistency is to check whether there is at least one answer location that is contextually consistent for each ground truth triple. It can be calculated by counting the results of Contextual Consistency.
We randomly sample 25 articles respectively from the 6 datasets (1002 ground truth triples with 2691 labeled answer locations in total) and let two human annotators label the Triple Accuracy for each ground truth triple and the Contextual Consistency for each answer location. The two human annotators reached an agreement on 131 of 132 randomly selected Triple Accuracy cases and on 229 of 234 randomly selected Contextual Consistency cases. The human evaluation results are shown in Table 8 . We find that the Triple Accuracy and the Triple Consistency are acceptable while the Contextual Consistency still needs to be improved. The Contextual Consistency problem is a weakness of distant supervision, and we leave this to our future work.
Conclusion
In this paper, we propose a novel QA based IE framework named QA4IE to address the weaknesses of previous IE solutions. In our framework (Figure 1 ), we divide the complicated IE problem into four steps and show that steps 1, 2 and 4 can be solved well enough by existing work. For the most difficult step 3, we transform it into a QA problem and solve it with our QA model. To train this QA model, we construct a large IE benchmark named QA4IE benchmark that consists of 293K documents and 2 million golden relation triples with 636 different relation types. To the best of our knowledge, our QA4IE benchmark is the largest document level IE benchmark. We compare our system with the best existing IE baseline systems on our QA4IE benchmark and the results show that our system achieves a great improvement over the baseline systems.
For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction.
Acknowledgements
W. Zhang is the corresponding author of this paper. The work done by SJTU is sponsored by National Natural Science Foundation of China (61632017, 61702327, 61772333) and Shanghai Sailing Program (17YF1428200). | A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer. |
4ce3a6632e7d86d29a42bd1fcf325114b3c11d46 | 4ce3a6632e7d86d29a42bd1fcf325114b3c11d46_0 | Q: Can this approach model n-ary relations?
Text: Introduction and Background
Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge Base Population BIBREF0 , Knowledge Graph Acquisition BIBREF1 , and Natural Language Understanding. However, existing IE systems still cannot produce high-quality relation tuples to effectively support downstream applications.
Previous IE Systems
Most previous IE systems can be divided into Relation Extraction (RE) based systems BIBREF2 , BIBREF3 and Open IE systems BIBREF4 , BIBREF5 , BIBREF6 .
Early work on RE decomposes the problem into Named Entity Recognition (NER) and relation classification. With the recent development of neural networks (NN), NN based NER models BIBREF7 , BIBREF8 and relation classification models BIBREF9 show better performance than previous handcrafted feature based methods. The recently proposed RE systems BIBREF10 , BIBREF11 try to jointly perform entity recognition and relation extraction to improve the performance. One limitation of existing RE benchmarks, e.g., NYT BIBREF12 , Wiki-KBP BIBREF13 and BioInfer BIBREF14 , is that they only involve 24, 19 and 94 relation types respectively, compared with the thousands of relation types in knowledge bases such as DBpedia BIBREF15 , BIBREF16 . Besides, existing RE systems can only extract relation tuples from a single sentence while cross-sentence information is ignored. Therefore, existing RE based systems are not powerful enough to support downstream applications in terms of performance or scalability.
On the other hand, early work on Open IE is mainly based on bootstrapping and pattern learning methods BIBREF17 . Recent work incorporates lexical features and sentence parsing results to automatically build a large number of pattern templates, based on which the systems can extract relation tuples from an input sentence BIBREF4 , BIBREF5 , BIBREF6 . An obvious weakness is that the extracted relations are formed by free texts which means they may be polysemous or synonymous and thus cannot be directly used without disambiguation and aggregation. The extracted free-text relations also bring extra manual evaluation cost, and how to automatically evaluate different Open IE systems fairly is an open problem. Stanovsky and Dagan BIBREF18 try to solve this problem by creating an Open IE benchmark with the help of QA-SRL annotations BIBREF19 . Nevertheless, the benchmark only involves 10K golden relation tuples. Hence, Open IE in its current form cannot provide a satisfactory solution to high-quality IE that supports downstream applications.
There are some recently proposed IE approaches which try to incorporate Question Answering (QA) techniques into IE. Levy et al. BIBREF20 propose to reduce the RE problem to answering simple reading comprehension questions. They build a question template for each relation type, and by asking questions with a relevant sentence and the first entity given, they can obtain relation triples from the sentence corresponding to the relation type and the first entity. Roth et al. BIBREF21 further improve the model performance on a similar problem setting. However, these approaches focus on sentence level relation argument extractions and do not provide a full-stack solution to general IE. In particular, they do not provide a solution to extract the first entity and its corresponding relation types before applying QA. Besides, sentence level relation extraction ignores the information across sentences such as coreference and inference between sentences, which greatly reduces the information extracted from the documents.
QA4IE Framework
To overcome the above weaknesses of existing IE systems, we propose a novel IE framework named QA4IE to perform document level general IE with the help of state-of-the-art approaches in Question Answering (QA) and Machine Reading Comprehension (MRC) area.
The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \lbrace e_i, r_{ij}, e_j\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:
Step 1. Recognize all the candidate entities in the input document $D$ according to the knowledge base $K$ . These entities serve as the first entity $e_i$ in the relation triples $R$ .
Step 2. For each candidate entity $e_i$ , discover the potential relations/properties as $r_{ij}$ from the knowledge base $K$ .
Step 3. Given a candidate entity-relation or entity-property pair $\lbrace e_i, r_{ij}\rbrace $ as a query, find the corresponding entity or value $e_j$ in the input document $D$ using a QA system. The query here can be directly formed by the word sequence of $\lbrace e_i, r_{ij}\rbrace $ , or built from templates as in BIBREF20 .
Step 4. Since the results of step 3 are formed by free texts in the input document $D$ , we need to link the results to the knowledge base $K$ .
This framework determines each of the three elements in relation triples step by step. Step 1 is equivalent to named entity recognition (NER), and state-of-the-art NER systems BIBREF22 , BIBREF8 can achieve over 0.91 F1-score on CoNLL'03 BIBREF23 , a well-known NER benchmark. For attribution discovery in step 2, we can take advantage of existing knowledge base ontologies such as Wikipedia Ontology to obtain a candidate relation/property list according to NER results in step 1. Besides, there is also some existing work on attribution discovery BIBREF24 , BIBREF25 and ontology construction BIBREF26 that can be used to solve the problem in step 2. The most difficult part in our framework is step 3 in which we need to find the entity (or value) $e_j$ in document $D$ according to the previous entity-relation (or entity-property) pair $\lbrace e_i, r_{ij}\rbrace $ . Inspired by recent success in QA and MRC BIBREF27 , BIBREF28 , BIBREF29 , we propose to solve step 3 in the setting of SQuAD BIBREF30 which is a very popular QA task. The problem setting of SQuAD is that given a document $\tilde{D}$ and a question $q$ , output a segment of text $a$ in $\tilde{D}$ as the answer to the question. In our framework, we assign the input document $D$ as $\tilde{D}$ and the entity-relation (or entity-property) pair $\lbrace e_i, r_{ij}\rbrace $ as $q$ , and then we can get the answer $a$ with a QA model. Finally in step 4, since the QA model can only produce answers formed by input free texts, we need to link the answer $a$ to an entity $e_j$ in the knowledge base $K$ , and the entity $e_j$ will form the target relation triple as $\lbrace e_i, r_{ij}, e_j\rbrace $ . Existing Entity Linking (EL) systems BIBREF31 , BIBREF32 directly solve this problem especially when we have high quality QA results from step 3.
As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.
Recent success on QA and MRC is mainly attributed to advanced deep learning architectures such as attention-based and memory-augmented neural networks BIBREF35 , BIBREF36 and the availability of large-scale datasets BIBREF37 , BIBREF38 especially SQuAD. The differences between step 3 and SQuAD can be summarized as follows. First, the answer to the question in SQuAD is restricted to a continuous segment of the input text, but in QA4IE, we remove this constraint which may reduce the number of target relation triples. Second, in existing QA and MRC benchmarks, the input documents are not very long and the questions may be complex and difficult to understand by the model, while in QA4IE, the input documents may be longer but the questions formed by entity-relation (or entity-property) pair are much simpler. Therefore, in our model, we incorporate Pointer Networks BIBREF39 to adapt to the answers formed by any words within the document in any order as well as Self-Matching Networks BIBREF29 to enhance the ability on modeling longer input documents.
Contributions
The contributions of this paper are as follows:
We propose a novel IE framework named QA4IE to overcome the weaknesses of existing IE systems. As we discussed above, the problem of step 1, 2 and 4 can be solved by existing work and we propose to solve the problem of step 3 with QA models.
To train a high quality neural network QA model, we build a large IE benchmark in QA style named QA4IE benchmark which consists of 293K Wikipedia articles and 2 million golden relation triples with 636 different relation types.
To adapt QA models to the IE problem, we propose an approach that enhances existing QA models with Pointer Networks and Self-Matching Networks.
We compare our model with IE baselines on our QA4IE benchmark and achieve a great improvement over previous baselines.
We open source our code and benchmark for repeatable experiments and further study of IE.
QA4IE Benchmark Construction
This section briefly presents the construction pipeline of the QA4IE benchmark to solve the problem of step 3 as in our framework (Figure 1 ). The largest existing IE benchmark BIBREF18 is created with the help of QA-SRL annotations BIBREF19 and consists of 3.2K sentences and 10K golden extractions. Following this idea, we study recent large-scale QA and MRC datasets and find that WikiReading BIBREF33 creates a large-scale QA dataset based on Wikipedia articles and WikiData relation triples BIBREF34 . However, we observe about 11% of QA pairs with errors such as wrong answer locations or a mismatch between the answer string and the answer words. Besides, over 50% of the QA pairs have answers involving words out of the input text or containing multiple answers. We consider these cases out of the problem scope of this paper and only focus on the information within the input text.
Therefore, we choose to build the benchmark following the implementation of WikiReading, based on Wikipedia articles and golden triples from Wikidata and DBpedia BIBREF15 , BIBREF16 . Specifically, we build our QA4IE benchmark in the following steps.
Dump and Preprocessing. We dump the English Wikipedia articles with Wikidata knowledge base and match each article with its corresponding relation triples according to its title. After cleaning data by removing low frequency tokens and special characters, we obtain over 4M articles and 18M triples with over 800 relation types.
Clipping. We discard the triples with multiple entities (or values) for $e_j$ (accounting for about 6%, e.g., a book may have multiple authors). Besides, we discard the triples with any word in $e_j$ missing from the corresponding article (accounting for about 50%). After this step, we obtain about 3.5M articles and 9M triples with 636 relation types.
Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. Out of 2064 DBpedia relations, we manually find 148 that can be projected to a WikiData relation. Then we gather all the DBpedia triples whose first entity corresponds to one of the above 3.5M articles and whose relation is one of the projected 148 relations. After the same clipping process as above and removing repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles.
Distillation. Since our benchmark is for IE, we prefer articles with more golden triples involved, assuming that Wikipedia articles with more annotated triples are more informative and better annotated. Therefore, we examine the distribution of the number of golden triples in articles and decide to discard the articles with fewer than 6 golden triples (accounting for about 80%). After this step, we obtain about 200K articles and 1.4M triples with 636 relation types.
Query and Answer Assignment. For each golden triple $\lbrace e_i, r_{ij}, e_j\rbrace $ , we assign the relation/property $r_{ij}$ as the query and the entity $e_j$ as the answer, because the Wikipedia article and its corresponding golden triples are all about the same entity $e_i$ , which is therefore unnecessary in the queries. Besides, we find the location of each $e_j$ in the corresponding article as the answer location. As we discussed in Section 1, we do not restrict $e_j$ to a continuous segment in the article as required in SQuAD. Thus we first try to detect a matched span for each $e_j$ and assign this span as the answer location. Then, for each remaining $e_j$ with no matched span, we search for a matched sub-sequence in the article and assign the index sequence as the answer location. We name them span-triples and seq-triples respectively. Note that each triple has an answer location because we have discarded the triples with unseen words in $e_j$ ; if we find multiple answer locations, all of them are assigned as ground truths.
Dataset Splitting. For comparing the performance on span-triples and seq-triples, we set up two different datasets named QA4IE-SPAN and QA4IE-SEQ. In QA4IE-SPAN, only articles with all span-triples are involved, while in QA4IE-SEQ, articles with seq-triples are also involved. For studying the influence of the article length as longer articles are normally more difficult to model by LSTMs, we split the articles according to the article length. We name the set of articles with lengths shorter than 400 as S, lengths between 400 and 700 as M, lengths greater than 700 as L. Therefore we obtain 6 different datasets named QA4IE-SPAN-S/M/L and QA4IE-SEQ-S/M/L. A 5/1/5 splitting of train/dev/test sets is performed. The detailed statistics of QA4IE benchmark are provided in Table 1 .
We further compare our QA4IE benchmark with some existing IE and QA benchmarks in Table 2 . One can observe that QA4IE benchmark is much larger than previous IE and QA benchmarks except for WikiReading and Zero-Shot Benchmark. However, as we mentioned at the beginning of Section 2, WikiReading is problematic for IE settings. Besides, Zero-Shot Benchmark is a sentence-level dataset and we have described the disadvantage of ignoring information across sentences at Section 1.1. Thus to our best knowledge, QA4IE benchmark is the largest document level IE benchmark and it can be easily extended if we change our distillation strategy.
Question Answering Model
In this section, we describe our Question Answering model for IE. The model overview is illustrated in Figure 2 .
The inputs of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[m]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_m$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as
$$\begin{split} g_t &= {\rm sigmoid}(W_gx_t+b_g) \\ s_t &= {\rm relu } (W_xx_t+b_x) \\ u_t &= g_t \odot s_t + (1 - g_t) \odot x_t~. \end{split}$$ (Eq. 18)
Here $W_g, W_x \in \mathbb {R}^{d \times 2d}$ and $b_g, b_x \in \mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .
Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:
$$\begin{split} u_t^{^{\prime }} &= {\rm BiLSTM}(u^{^{\prime }}_{t-1},u_t) \\ v_t^{^{\prime }} &= {\rm BiLSTM}(v^{^{\prime }}_{t-1},v_t)~. \end{split}$$ (Eq. 19)
Here we obtain $\mathbf {U} = [u_1^{^{\prime }}, ... , u_n^{^{\prime }}] \in \mathbb {R}^{2d \times n}$ and $\mathbf {V} = [v_1^{^{\prime }}, ... , v_m^{^{\prime }}] \in \mathbb {R}^{2d \times m}$ . Then we feed $\mathbf {U}$ and $\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.
After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as
$$\begin{split} o_t &= {\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\ s_j^t &= w^T {\rm tanh}(W_hh_j+\tilde{W_h}h_t)\\ \alpha _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\alpha _i^th_i ~. \end{split}$$ (Eq. 20)
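A small numpy sketch of the attention step in Eq. (20) follows; the BiLSTM that consumes $[h_t, c_t]$ is omitted, and the weight shapes follow the definitions given in the next paragraph.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def self_match_contexts(H, W_h, W_h_tilde, w):
    """For each position t of the query-aware embeddings H (8d x n),
    attend over all positions j and return the contexts c_t (8d x n)."""
    d8, n = H.shape
    proj = W_h @ H                              # W_h h_j, shared across all t
    C = np.zeros((d8, n))
    for t in range(n):
        scores = w @ np.tanh(proj + (W_h_tilde @ H[:, t])[:, None])
        alpha = softmax(scores)                 # attention weights alpha_i^t
        C[:, t] = H @ alpha                     # c_t = sum_i alpha_i^t h_i
    return C
```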
Here $W_h, \tilde{W_h} \in \mathbb {R}^{d \times 8d}$ and $w \in \mathbb {R}^d$ are trainable weights, and $[h, c]$ denotes row-wise vector concatenation. Besides, $\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embedding of the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of the input text after this step. Finally, we feed the embeddings $\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as
$$\begin{split} p_t &= {\rm LSTM}(p_{t-1}, c_t) \\ s_j^t &= w^T {\rm tanh}(W_oo_j+W_pp_{t-1})\\ \beta _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\beta _i^to_i~. \end{split}$$ (Eq. 21)
The initial state of LSTM $p_0$ is $o_n$ . We can then model the probability of the $t^{th}$ token $a^t$ by
$$\begin{split} {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O}) &= (\beta _1^t, \beta _2^t, ... , \beta _n^t, \beta _{n+1}^t) \\ {\rm P}(a^t_i) &\triangleq {\rm P}(a^t = i|a^1, ... , a^{t-1}, \mathbf {O}) = \beta _i^t ~. \end{split}$$ (Eq. 22)
Here $\beta _{n+1}^t$ denotes the probability of generating the “ ${\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\textbf {a}$ is as follows
$${\rm P}(\textbf {a}|\mathbf {O}) = \prod _t {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O})~.$$ (Eq. 23)
Given the supervision of answer sequence $\mathbf {y} = (y_1, ... , y_L)$ , we can write down the loss function of our model as
$${\rm L(\theta )} = -\sum _{t=1}^L \log {\rm P} (a^t_{y_t})~.$$ (Eq. 24)
To train our model, we minimize the loss function ${\rm L(\theta )}$ based on training examples.
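For clarity, here is a tiny sketch of how Eqs. (23)-(24) turn the per-step pointer distributions into a training loss, assuming those distributions have already been computed by the decoder.

```python
import numpy as np

def answer_nll(step_probs, gold_indices):
    """Negative log-likelihood of a gold answer sequence (Eq. 24), given
    step_probs[t] = (beta_1^t, ..., beta_n^t, beta_{n+1}^t) from the decoder
    and gold_indices[t] = y_t (the last index plays the role of `eos')."""
    return -sum(np.log(step_probs[t][y]) for t, y in enumerate(gold_indices))
```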
Experimental Setup
We build our QA4IE benchmark following the steps described in Section 2. In experiments, we train and evaluate our QA models on the corresponding train and test sets while the hyper-parameters are tuned on dev sets. In order to make our experiments more informative, we also evaluate our model on SQuAD dataset BIBREF30 .
The preprocessing of our QA4IE benchmark and the SQuAD dataset is performed with the open source code from BIBREF27 . We use 100 1D filters with width 5 to construct the CharCNN in our char embedding layer. We set the hidden size $d=100$ for all the hidden states in our model. The optimizer we use is the AdaDelta optimizer BIBREF45 with an initial learning rate of 2. A dropout BIBREF46 rate of 0.2 is applied in all the CNN, LSTM and linear transformation layers in our model during training. For the SQuAD dataset and our small sized QA4IE-SPAN/SEQ-S datasets, we set the max length of input texts to 400 and the mini-batch size to 20. For the middle sized (and large sized) QA4IE datasets, we set the max length to 700 (800) and the batch size to 7 (5). We apply early stopping after 10 epochs of training. Our model is trained on a GTX 1080 Ti GPU and it takes about 14 hours on the small sized QA4IE datasets. We implement our model with TensorFlow BIBREF47 and optimize the computationally expensive LSTM layers with LSTMBlockFusedCell.
Results in QA Settings
We first perform experiments in QA settings to evaluate our QA model on both the SQuAD dataset and the QA4IE benchmark. Since our goal is to solve IE, not QA, the motivation of this part of the experiments is to evaluate the performance of our model and to compare the QA4IE benchmark with existing datasets. Two metrics are introduced in the SQuAD dataset: Exact Match (EM) and F1-score. EM measures the percentage of predictions that match one of the ground truth answers exactly, while F1-score measures the token overlap between the prediction and the ground truth answers. Our QA4IE benchmark also adopts these two metrics.
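As a reference, a simple sketch of the two metrics is given below; the official SQuAD evaluation additionally normalizes answers (lowercasing, stripping punctuation and articles), which is omitted here.

```python
from collections import Counter

def exact_match(prediction, ground_truths):
    return float(any(prediction == g for g in ground_truths))

def token_f1(prediction, ground_truth):
    """Bag-of-tokens F1 between a prediction and one ground truth answer."""
    p, g = prediction.split(), ground_truth.split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```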
Table 3 presents the results of our QA model on SQuAD dataset. Our model outperforms the previous sequence model but is not competitive with span models because it is designed to produce sequence answers in IE settings while baseline span models are designed to produce span answers for SQuAD dataset.
The comparison between our QA model and two baseline QA models on our QA4IE benchmark is shown in Table 4 . To train both baseline QA models, we use the same configuration of max input length as our model and tune the rest of the hyper-parameters on the dev sets. Our model outperforms these two baselines on all 6 datasets. The performance is good on the S and M datasets but worse for longer documents. As we mentioned in Section 4.1, we set the max input length to 800 and ignore the remaining words on the L datasets. In fact, 11% of the queries in our benchmark have no answer within the first 800 words. Processing longer documents is a tough problem BIBREF51 and we leave this to our future work.
To study the contribution of each component in our model, we present a model ablation study in Table 5 . We do not ablate the Attention Flow Layer or the Pointer Network Decoder, as the model cannot work without them. We can observe that the first three components effectively improve the performance, but the Self-Matching Layer makes training about 40% more computationally expensive. Besides, the LSTMBlockFusedCell works effectively and accelerates the training process by 6 times without influencing the performance.
Results in IE Settings
In this subsection, we put our QA model in the entire pipeline of our QA4IE framework (Figure 1 ) and evaluate the framework in IE settings. Existing IE systems are all free-text based Open IE systems, so we need to manually evaluate the free-text based results in order to compare our model with the baselines. Therefore, we conduct experiments on a small dataset, the dev set of QA4IE-SPAN-S which consists of 4393 documents and 28501 ground truth queries.
Our QA4IE benchmark is based on Wikipedia articles, and all the ground truth triples of each article have the same first entity (i.e. the title of the article). Thus, we can directly use the title of the article as the first entity of each triple without performing step 1 (entity recognition) in our framework. Besides, all the ground truth triples in our benchmark come from the knowledge base, where they are already disambiguated and aggregated; therefore step 4 (entity linking) is very simple and we do not evaluate it in our experiments.
A major difference between QA settings and IE settings is that in QA settings, each query corresponds to an answer, while in the QA4IE framework, the QA model takes a candidate entity-relation (or entity-property) pair as the query and needs to tell whether an answer to the query can be found in the input text. We can consider the IE settings here as performing step 2 and then step 3 of the QA4IE framework.
In step 2, we need to build a candidate query list for each article in the dataset. Instead of incorporating an existing ontology or knowledge base, we use a simple but effective way to build the candidate query list of an article. Since we have a ground truth query list with labeled answers for each article, we can add all the neighboring queries of each ground truth query into the query list. Two queries are defined as neighboring queries if they co-occur in the ground truth query list of any article in the dataset. We transform the dev set of QA4IE-SPAN-S above by adding neighboring queries into the query list. After this step, the number of queries grows to 426336, and only 28501 of them are ground truth queries labeled with an answer.
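A minimal sketch of this candidate-list expansion follows, assuming the ground truth query lists are given as a mapping from article id to its list of queries (the data structure names are our own).

```python
from collections import defaultdict
from itertools import combinations

def build_candidate_queries(gold_query_lists):
    """Expand each article's gold query list with all neighboring queries,
    i.e. queries that co-occur in the gold list of any article."""
    neighbors = defaultdict(set)
    for queries in gold_query_lists.values():
        for q1, q2 in combinations(set(queries), 2):
            neighbors[q1].add(q2)
            neighbors[q2].add(q1)
    candidates = {}
    for article_id, queries in gold_query_lists.items():
        expanded = set(queries)
        for q in queries:
            expanded |= neighbors[q]
        candidates[article_id] = sorted(expanded)
    return candidates
```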
In step 3, we require our QA model to output a confidence score along with the answer to each candidate query. Our QA model produces no answer to a query when the confidence score is less than a threshold $\delta $ or the output is an “ ${\rm eos}$ ” symbol. For the answers with a confidence score $\ge \delta $ , we evaluate them by the EM measurement with ground truth answers and count the true positive samples in order to calculate the precision and recall under the threshold $\delta $ . Specifically, we try two confidence scores calculated as follows:
$$\begin{split} {\rm Score_{mul}} = \prod _{t=1}^L{\rm P}(a^t_{i_t}),~~~{\rm Score_{avg}} = \sum _{t=1}^L{\rm P}(a^t_{i_t}) / L ~, \end{split}$$ (Eq. 34)
where $(a^1_{i_1}, ... , a^L_{i_L})$ is the answer sequence and ${\rm P}(a^t_i)$ is defined in Eq. ( 22 ). ${\rm Score_{mul}}$ is equivalent to the training loss in Eq. ( 24 ) and ${\rm Score_{avg}}$ takes the answer length into account.
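Both scores are cheap to compute from the decoder outputs; a sketch:

```python
import numpy as np

def confidence_scores(token_probs):
    """Score_mul and Score_avg of Eq. (34) for one decoded answer, given the
    probabilities P(a^t_{i_t}) of the emitted tokens."""
    p = np.asarray(token_probs, dtype=float)
    return float(np.prod(p)), float(p.mean())
```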
The precision-recall curves of our framework based on the two confidence scores are plotted in Figure 3 . We can observe that the EM rate we achieve in QA settings is actually the best recall (91.87) in this curve (by setting $\delta = 0$ ). The best F1-scores of the two curves are 29.97 (precision $= 21.61$ , recall $= 48.85$ , $\delta = 0.91$ ) for ${\rm Score_{mul}}$ and 31.05 (precision $= 23.93$ , recall $= 44.21$ , $\delta = 0.97$ ) for ${\rm Score_{avg}}$ . ${\rm Score_{avg}}$ is better than ${\rm Score_{mul}}$ , which suggests that the answer length should be taken into account.
We then evaluate existing IE systems on the dev set of QA4IE-SPAN-S and empirically compare them with our framework. Note that while BIBREF20 is closely related to our work, we cannot fairly compare our framework with BIBREF20 because their systems are in the sentence level and require additional negative samples for training. BIBREF21 is also related to our work, but their dataset and code have not been published yet. Therefore, we choose to evaluate three popular Open IE systems, Open IE 4 BIBREF6 , Stanford IE BIBREF4 and ClauseIE BIBREF5 .
Since Open IE systems take a single sentence as input and output a set of free-text based triples, we need to find the sentences involving ground truth answers and feed these sentences into the Open IE systems. In the dev set of QA4IE-SPAN-S, there are 28501 queries with 44449 answer locations labeled in the 4393 documents. By feeding the 44449 sentences into the Open IE systems, we obtain a set of extracted triples from each sentence. We calculate the number of true positive samples by first filtering out triples with less than 20% of their words overlapping with ground truth answers and then asking two human annotators to verify the remaining triples independently. Since, in the experiments, our framework is given the ground-truth first entity of each triple (the title of the corresponding Wikipedia article) while the baseline systems do not have this information, we ask our human annotators to ignore mistakes on the first entities when evaluating triples produced by the baseline systems, to offset this disadvantage. For example, the 3rd case of ClauseIE and the 4th case of Open IE 4 in Table 7 are both labeled as correct by our annotators even though the first entities are pronouns. The two human annotators reached an agreement on 191 out of 195 randomly selected cases.
The evaluation results of the three Open IE baselines are shown in Table 6 . We can observe that most of the extracted triples are not related to ground truths and the precision and recall are all very low (around 1%) although we have already helped the baseline systems locate the sentences containing ground truth answers.
Case Study
In this subsection, we perform case studies of IE settings in Table 7 to better understand the models and benchmarks. The baseline Open IE systems produce triples by analyzing the subjects, predicates and objects in input sentences, and thus our annotators lowered the bar for accepting triples. However, the analysis of semantic roles and parse trees does not work well on complicated input sentences like the 2nd and the 3rd cases. Besides, the baseline systems can hardly solve the last two cases, which require inference over the input sentences.
Our framework works very well on this dataset, with the QA measurements EM $= 91.87$ and F1 $= 93.53$ ; the IE measurements can be found in Figure 3 . Most of the error cases are of the fourth type, which is judged acceptable by human annotators. Note that our framework takes the whole document as input while the baseline systems take individual sentences as input, which means the experimental setting is much more difficult for our framework.
Human Evaluation on QA4IE Benchmark
Finally, we perform a human evaluation on our QA4IE benchmark to verify the reliability of former experiments. The evaluation metrics are as follows:
Triple Accuracy is to check whether each ground truth triple is accurate (one cannot find conflicts between the ground truth triple and the corresponding article) because the ground truth triples from WikiData and DBpedia may be incorrect or incomplete.
Contextual Consistency is to check whether the context of each answer location is consistent with the corresponding ground truth triple (one can infer from the context to obtain the ground truth triple), because we keep all matched answer locations as ground truths but some of them may be irrelevant to the corresponding triple.
Triple Consistency is to check whether there is at least one answer location that is contextually consistent for each ground truth triple. It can be calculated by counting the results of Contextual Consistency.
We randomly sample 25 articles respectively from the 6 datasets (in total 1002 ground truth triples with 2691 labeled answer locations) and let two human annotators label the Triple Accuracy for each ground truth triple and the Contextual Consistency for each answer location. The two human annotators reached an agreement on 131 of 132 randomly selected Triple Accuracy cases and on 229 of 234 randomly selected Contextual Consistency cases. The human evaluation results are shown in Table 8 . We find that the Triple Accuracy and the Triple Consistency are acceptable, while the Contextual Consistency still needs to be improved. The Contextual Consistency problem is a weakness of distant supervision, and we leave this to our future work.
Conclusion
In this paper, we propose a novel QA based IE framework named QA4IE to address the weaknesses of previous IE solutions. In our framework (Figure 1 ), we divide the complicated IE problem into four steps and show that steps 1, 2 and 4 can be solved well enough by existing work. For the most difficult step 3, we transform it into a QA problem and solve it with our QA model. To train this QA model, we construct a large IE benchmark named QA4IE benchmark that consists of 293K documents and 2 million golden relation triples with 636 different relation types. To the best of our knowledge, our QA4IE benchmark is the largest document-level IE benchmark. We compare our system with the best existing IE baseline systems on our QA4IE benchmark and the results show that our system achieves a substantial improvement over the baseline systems.
For future work, we plan to handle triples with multiple entities as the second entity, which are excluded from the problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are challenging problems, as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction.
Acknowledgements
W. Zhang is the corresponding author of this paper. The work done by SJTU is sponsored by National Natural Science Foundation of China (61632017, 61702327, 61772333) and Shanghai Sailing Program (17YF1428200). | No |
e7c0cdc05b48889905cc03215d1993ab94fb6eaa | e7c0cdc05b48889905cc03215d1993ab94fb6eaa_0 | Q: Was this benchmark automatically created from an existing dataset?
| No
99760276cfd699e55b827ceeb653b31b043b9ceb | 99760276cfd699e55b827ceeb653b31b043b9ceb_0 | Q: How does morphological analysis differ from morphological inflection?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continual language use is the creation of language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages , including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular; and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates this: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
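To make the triplet logic explicit, here is a small sketch of how a PN triplet determines the tone of a paradigm cell; the segmental stem changes that accompany inflection are not modeled here.

```python
def triplet_tone(triplet, person, number):
    """Tone assigned by a PN triplet [X, Y, Z]: X for the third person
    singular and all plural forms, Y for the second person singular,
    Z for the first person singular."""
    X, Y, Z = triplet
    if number == "PL" or person == 3:
        return X
    return Y if person == 2 else Z

# progressive triplet of lyu1 `fall': the 1SG progressive carries tone 32
assert triplet_tone([1, 42, 32], person=1, number="SG") == 32
```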
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why the SJQ verb morphology is particularly interesting, and a challenging testcase for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and test set (471 groupings). Basic statistics of the corpus are outlined in Table 1 . Compared to all the other languages from the Unimorph project, this puts SJQ Chatino in the low- to mid-resource category, but nonetheless it is more than enough for benchmarking purposes.
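A sketch of the random split described above is given below; the seed and ordering are arbitrary assumptions, and the released splits should be used for comparability.

```python
import random

def split_groupings(groupings, seed=0):
    """Randomly split the 4716 (lemma, tag set, form) groupings into
    train/dev/test sets of 3774/471/471."""
    items = list(groupings)
    random.Random(seed).shuffle(items)
    return items[:3774], items[3774:4245], items[4245:]
```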
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
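Framed as string transduction, a single training example might be serialized as below; whether tones are kept as single symbols or split into digit characters is a modeling choice that we leave as an assumption here.

```python
# one inflection example from the paradigm of lyu1 `fall'
lemma, tags, form = "lyu1", "1;SG;PROG", "nlyon32"
source = list(lemma) + tags.split(";")   # ['l', 'y', 'u', '1', '1', 'SG', 'PROG']
target = list(form)                      # ['n', 'l', 'y', 'o', 'n', '3', '2']
```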
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12, which hallucinates new training paradigms based on character-level alignments, does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
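For reference, the exact-match accuracy reported here is simply the fraction of test items whose predicted form is identical to the gold form; a small sketch (the helper and the toy strings are ours):
```python
def exact_match_accuracy(predictions, references):
    """Share of predicted word forms that match the gold forms exactly."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# e.g. exact_match_accuracy(["nlyon32", "lyu1"], ["nlyon32", "lyu20"]) -> 0.5
```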
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, 0.27 average Levenshtein distance) and that providing the gold morphological tags yields a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and 0.13 average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages in the SIGMORPHON 2019 shared task BIBREF11.
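The average Levenshtein distance used above can be computed with the standard dynamic-programming edit distance; a self-contained sketch (our own helper, not the official evaluation script):
```python
def levenshtein(a, b):
    """Minimum number of character insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution / match
        prev = curr
    return prev[-1]

def average_levenshtein(predictions, references):
    pairs = list(zip(predictions, references))
    return sum(levenshtein(p, r) for p, r in pairs) / len(pairs)
```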
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | Morphological analysis is the task of creating a morphosyntactic description for a given word, inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form |
247e1fe052230458ce11b98e3637acf0b86795cd | 247e1fe052230458ce11b98e3637acf0b86795cd_0 | Q: What was the criterion used for selecting the lemmata?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continued language use is to further create language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E. Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language that is the focus of this study, belongs to Eastern Chatino and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages, including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why SJQ verb morphology is particularly interesting, and a challenging test case for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and a test set (471 groupings). Basic statistics of the corpus are outlined in Table 1. Compared to all the other languages from the UniMorph project, this puts SJQ Chatino in the low- to mid-resource category, but it is nonetheless more than enough for benchmarking purposes.
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
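A tag-set string such as 1;SG;PROG can be decomposed into named features before being fed to a model; the sketch below assumes the feature order person;number;(inclusivity;)aspect-mood suggested by the examples in this paper, which may not match the exact layout of the released files:
```python
def parse_tagset(tagset):
    """Split a tag-set like '1;SG;PROG' or '1;PL;INCL;CPL' into named features."""
    parts = tagset.split(";")
    features = {"person": parts[0], "number": parts[1]}
    if parts[2] in ("INCL", "EXCL"):          # inclusivity only appears in 1PL cells
        features["inclusivity"] = parts[2]
        features["aspect_mood"] = parts[3]
    else:
        features["aspect_mood"] = parts[2]
    return features

print(parse_tagset("1;SG;PROG"))
# {'person': '1', 'number': 'SG', 'aspect_mood': 'PROG'}
```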
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12, which hallucinates new training paradigms based on character-level alignments, does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, 0.27 average Levenshtein distance) and that providing the gold morphological tags yields a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and 0.13 average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages in the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | Unanswerable |
79cfd1b82c72d18e2279792c66a042c0e9dfa6b7 | 79cfd1b82c72d18e2279792c66a042c0e9dfa6b7_0 | Q: What are the architectures used for the three tasks?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continued language use is to further create language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E. Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language that is the focus of this study, belongs to Eastern Chatino and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages, including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
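Since the numeral convention writes the tone as trailing digits on each form (a single digit for a level tone, two digits for a contour), the segmental material and the tone can be separated with a simple pattern; this is our own illustrative helper, assuming every form ends in its tone digits:
```python
import re

def split_tone(form):
    """Split a written form such as 'nlyon32' into its segments and its tone."""
    match = re.fullmatch(r"(.+?)(\d{1,2})", form)
    if match is None:
        return form, None          # no trailing tone digits found
    return match.group(1), match.group(2)

print(split_tone("lyu1"))     # ('lyu', '1')
print(split_tone("nlyon32"))  # ('nlyon', '32')
```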
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why SJQ verb morphology is particularly interesting, and a challenging test case for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and a test set (471 groupings). Basic statistics of the corpus are outlined in Table 1. Compared to all the other languages from the UniMorph project, this puts SJQ Chatino in the low- to mid-resource category, but it is nonetheless more than enough for benchmarking purposes.
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12, which hallucinates new training paradigms based on character-level alignments, does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
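Concretely, the analysis model sees the form as a sequence of characters and must emit the tag sequence; a toy sketch of how one training pair could be laid out (the formatting helper and the example pair are ours, purely for illustration):
```python
def analysis_pair(form, tagset):
    """Frame context-agnostic analysis as characters in, morphological tags out."""
    source = list(form)           # character-level input to the BiLSTM encoder
    target = tagset.split(";")    # tag sequence the decoder must produce
    return source, target

src, tgt = analysis_pair("nlyon32", "1;SG;PROG")
print(src)  # ['n', 'l', 'y', 'o', 'n', '3', '2']
print(tgt)  # ['1', 'SG', 'PROG']
```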
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, 0.27 average Levenshtein distance) and that providing the gold morphological tags yields a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and 0.13 average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages in the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | DyNet |
9e1bf306658ef2972159643fdaf149c569db524b | 9e1bf306658ef2972159643fdaf149c569db524b_0 | Q: Which language family does Chatino belong to?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continued language use is to further create language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E. Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language that is the focus of this study, belongs to Eastern Chatino and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages, including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why SJQ verb morphology is particularly interesting, and a challenging test case for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and a test set (471 groupings). Basic statistics of the corpus are outlined in Table 1. Compared to all the other languages from the UniMorph project, this puts SJQ Chatino in the low- to mid-resource category, but it is nonetheless more than enough for benchmarking purposes.
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12, which hallucinates new training paradigms based on character-level alignments, does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, 0.27 average Levenshtein distance) and that providing the gold morphological tags yields a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and 0.13 average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages in the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | the Otomanguean language family |
25b24ab1248f14a621686a57555189acc1afd49c | 25b24ab1248f14a621686a57555189acc1afd49c_0 | Q: What system is used as baseline?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continued language use is to further create language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E. Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language that is the focus of this study, belongs to Eastern Chatino and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages, including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why SJQ verb morphology is particularly interesting, and a challenging test case for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and a test set (471 groupings). Basic statistics of the corpus are outlined in Table 1. Compared to all the other languages from the UniMorph project, this puts SJQ Chatino in the low- to mid-resource category, but it is nonetheless more than enough for benchmarking purposes.
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12, which hallucinates new training paradigms based on character-level alignments, does not yield significant improvements in accuracy (only a 2 percentage point increase, compared with increases of more than 15 percentage points in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, 0.27 average Levenshtein distance) and that providing the gold morphological tags yields a performance boost, indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and 0.13 average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages in the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | DyNet |
8486e06c03f82ebd48c7cfbaffaa76e8b899eea5 | 8486e06c03f82ebd48c7cfbaffaa76e8b899eea5_0 | Q: How was annotation done?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important itself, does not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continual language use is by further creating language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages , including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
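The triplet logic can be illustrated with a small worked example (our own toy encoding, following the mapping described above: X for the third singular and all plurals, Y for the second singular, Z for the first singular):

```python
# Toy illustration of how a conjugation class's tone triplets determine the tone of a
# paradigm cell, using the triplets given above for lyu1 `fall'.
TRIPLETS = {
    "CPL": (1, 42, 20),
    "PROG": (1, 42, 32),
    "HAB": (20, 42, 32),
    "POT": (20, 42, 32),
}

def cell_tone(aspect: str, person: int, number: str) -> int:
    x, y, z = TRIPLETS[aspect]
    if number == "PL":
        return x                          # tone X covers all plural forms
    return {3: x, 2: y, 1: z}[person]     # ... and 3SG; Y marks 2SG, Z marks 1SG

print(cell_tone("PROG", 1, "SG"))         # -> 32, the tone of the 1;SG;PROG form nlyon32
```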
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why the SJQ verb morphology is particularly interesting, and a challenging testcase for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and test set (471 groupings). Basic statistics of the corpus are outlined in Table 1 . Compared to all the other languages from the Unimorph project, this puts SJQ Chatino in the low- to mid-resource category, but nonetheless it is more than enough for benchmarking purposes.
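A minimal sketch of how such paradigms can be flattened into (lemma, tag-set, form) groupings and split 80/10/10 is given below; the paradigm contents and the random seed are illustrative.

```python
# Sketch: flatten paradigms into (lemma, tag-set, form) groupings and split them
# randomly 80/10/10, as described above. Paradigm contents here are illustrative.
import random

def paradigm_to_groupings(lemma, paradigm):
    """paradigm maps a tag-set string (e.g. '1;SG;PROG') to the inflected form."""
    return [(lemma, tags, form) for tags, form in paradigm.items()]

groupings = paradigm_to_groupings("lyu1", {
    "3;SG;CPL": "lyu1",        # the lemma is the completive third-person singular form
    "1;SG;PROG": "nlyon32",
})
# ... extend with the remaining paradigms ...

random.seed(0)
random.shuffle(groupings)
n = len(groupings)
train = groupings[: int(0.8 * n)]
dev = groupings[int(0.8 * n): int(0.9 * n)]
test = groupings[int(0.9 * n):]
```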
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12 that hallucinates new training paradigms based on character level alignments does not yield significant improvements in accuracy (only 2 percentage points increase, cf. with more than 15 percentage points increases in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, $0.27$ average Levenshtein distance) and that providing the gold morphological tags provides a performance boost indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and $0.13$ average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages for the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | hand-curated collection of complete inflection tables for 198 lemmata |
27f575e90487ef68298cfb6452683bb977e39e43 | 27f575e90487ef68298cfb6452683bb977e39e43_0 | Q: How was the data collected?
Text: Introduction
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind.
The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances BIBREF0, BIBREF1 provide promising directions towards a solution for this issue.
However, language documentation and linguistic description, although extremely important itself, does not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continual language use is by further creating language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages.
The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As Natural Language Processing (NLP) keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages.
With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language.
We first briefly discuss the Chatino language and the intricacies of its verb morphology (§SECREF2), then describe the resource (§SECREF3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§SECREF4). We make our resource publicly available online.
The Chatino Language
Chatino is a group of languages spoken in Oaxaca, Mexico. Together with the Zapotec language group, the Chatino languages form the Zapotecan branch of the Otomanguean language family. There are three main Chatino languages: Zenzontepec Chatino (ZEN, ISO 639-2 code czn), Tataltepec Chatino (TAT, cta), and Eastern Chatino (ISO 639-2 ctp, cya, ctz, and cly) (E.Cruz 2011 and Campbell 2011). San Juan Quiahije Chatino (SJQ), the language of the focus of this study, belongs to Eastern Chatino, and is used by about 3000 speakers.
The Chatino Language ::: Typology and Writing System
Eastern Chatino languages , including SJQ Chatino, are intensively tonal BIBREF2, BIBREF3. Tones mark both lexical and grammatical distinctions in Eastern Chatino languages.
In SJQ Chatino, there are eleven tones. Three different systems for representing tone distinctions are employed in the literature: the S-H-M-L system of BIBREF2; the numeral system of BIBREF4; and the alphabetic system of BIBREF3. The correspondences among these three systems are given in Table . For present purposes, we will use numeral representations of the second sort. The number 1 represents a high pitch, 4 represents a low pitch, and double digits represent contour tones.
The Chatino Language ::: Verb Morphology
SJQ Chatino verb inflection distinguishes four aspect/mood categories: completive (`I did'), progressive (`I am doing'), habitual (`I habitually do') and potential (`I might do'). In each of these categories, verbs inflect for three persons (first, second, third) and two numbers (singular, plural) and distinguish inclusive and exclusive categories of the first person plural (`we including you' vs `we excluding you'). Verbs can be classified into dozens of different conjugation classes. Each conjugation class involves its own tone pattern; each tone pattern is based on a series of three person/number (PN) triplets. A PN triplet [X, Y, Z] consists of three tones: tone X is employed in the third person singular as well as in all plural forms; tone Y is employed in the second person singular, and tone Z, in the first person singular. Thus, a verb's membership in a particular conjugation class entails the assignment of one tone triplet to completive forms, another to progressive forms, and a third to habitual and potential forms. The paradigm of the verb lyu1 `fall' in Table illustrates: the conjugation class to which this verb belongs entails the assignment of the triplet [1, 42, 20] to the completive, [1, 42, 32] to the progressive, and [20, 42, 32] to the habitual and potential. Verbs in other conjugation classes exhibit other triplet series.
The Resource
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
Two examples of complete inflection tables for the verbs ndyu2 `fell from above' and lyu1 `fall' are shown in Table . Note how the first verb has the same PN triplet for all four aspect/mood categories, while the second paradigm is more representative in that it involves three different triplets (one for the completive, another for the progressive, and another for the habitual/potential). This variety is at the core of why the SJQ verb morphology is particularly interesting, and a challenging testcase for modern NLP systems.
In total, we end up with 4716 groupings (triplets) of a lemma, a tag-set, and a form; we split these groupings randomly into a training set (3774 groupings), a development set (471 groupings), and test set (471 groupings). Basic statistics of the corpus are outlined in Table 1 . Compared to all the other languages from the Unimorph project, this puts SJQ Chatino in the low- to mid-resource category, but nonetheless it is more than enough for benchmarking purposes.
Baseline Results ::: Inflectional realization
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
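One common way to serialise such an example for a character-level encoder-decoder is to prepend the tags as special symbols to the lemma's characters; the sketch below illustrates this generic recipe and is not necessarily the exact preprocessing of the system used here.

```python
# Generic sketch of turning an inflection example into encoder/decoder sequences:
# morphological tags become special input symbols prepended to the lemma's characters.
def encode_example(lemma: str, tags: str, form: str):
    source = ["<%s>" % t for t in tags.split(";")] + list(lemma)
    target = list(form)
    return source, target

src, tgt = encode_example("lyu1", "1;SG;PROG", "nlyon32")
print(src)   # ['<1>', '<SG>', '<PROG>', 'l', 'y', 'u', '1']
print(tgt)   # ['n', 'l', 'y', 'o', 'n', '3', '2']
```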
Morphological inflection has been thoroughly studied in monolingual high resource settings, especially through the recent SIGMORPHON challenges BIBREF8, BIBREF9, BIBREF10, with the latest iteration focusing more on low-resource settings, utilizing cross-lingual transfer BIBREF11. We use the guidelines of the state-of-the-art approach of BIBREF12 that achieved the highest inflection accuracy in the latest SIGMORPHON 2019 morphological inflection shared task. Our models are implemented in DyNet BIBREF13.
Inflection results are outlined in Table . In the `standard' setting we simply train on the pre-defined training set, achieving an exact-match accuracy of 60% over the test set. Interestingly, the data augmentation approach of BIBREF12 that hallucinates new training paradigms based on character level alignments does not yield significant improvements in accuracy (only 2 percentage points increase, cf. with more than 15 percentage points increases in other languages). These results indicate that automatic morphological inflection for low-resource tonal languages like SJQ Chatino poses a particularly challenging setting, which perhaps requires explicit handling of tone information by the model.
Baseline Results ::: Morphological Analysis
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance for the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
Baseline Results ::: Lemmatization
Lemmatization is the task of retrieving the underlying lemma from which an inflected form was derived. Although in some languages the lemma is distinct from all forms, in SJQ Chatino the lemma is defined as the completive third-person singular form. As a computational task, lemmatization entails producing the lemma given an inflected form (and possibly, given a set of morphological tags describing the input form). Popular approaches tackle it as a character-level edit sequence generation task BIBREF15, or as a character-level sequence-to-sequence task BIBREF16. For our baseline lemmatization systems we follow the latter approach. We trained a character level encoder-decoder model, similar to the above-mentioned inflection system, implemented in DyNet.
The baseline results, with and without providing gold morphological tags along with the inflected form as input, are outlined in Table . We find that automatic lemmatization in SJQ Chatino achieves fairly high accuracy even with our simple baseline models (89% accuracy, $0.27$ average Levenshtein distance) and that providing the gold morphological tags provides a performance boost indicated by small improvements on both metrics. It is worth noting, though, that these results are also well below the 94–95% average accuracy and $0.13$ average Levenshtein distance that lemmatization models achieved over 107 treebanks in 66 languages for the SIGMORPHON 2019 shared task BIBREF11.
Related Work
Our work builds and expands upon previous work on Indigenous languages of the Americas, specifically focusing on the complexity of their morphology. Among other works similar to ours, BIBREF17 focused on the morphology of Dene verbs, BIBREF18 on Arapaho verbs, BIBREF19 on Shipibo-Konibo, and BIBREF20 on Saint Lawrence Island and Central Siberian Yupik. BIBREF21 describe an approach for eliciting complete inflection paradigms, with experiments in languages like Nahuatl. Our resource is the first one for SJQ Chatino, but it also provides an exciting new data point in the computational study of morphological analysis, lemmatization, and inflection, as it is the first one in a tonal language with explicit tonal markings in the writing system. In a similar vein, the Oto-Manguean Inflectional Class Database BIBREF22 provides a valuable resource for studying the verbal morphology of Oto-Manguean languages (including a couple of other Chatino variants: Yaitepec and Zenzotepec Chatino) but not in a format suitable for computational experiments.
Conclusion
We presented a resource of 198 complete inflectional paradigms in San Juan Quiahije Chatino, which will facilitate research in computational morphological analysis and inflection for low-resource tonal languages and languages of Mesoamerica. We also provide strong baseline results on computational morphological analysis, lemmatization, and inflection realization, using character-level neural encoder-decoder systems.
For future work, while we will keep expanding our resource to include more paradigms, we will also follow the community guidelines in extending our resource to include morphological analysis and inflection examples in context.
Acknowledgements
Part of this work was done during the Workshop on Language Technology for Language Documentation and Revitalization. This material is based upon work generously supported by the National Science Foundation under grant 1761548. | Unanswerable |
157b9f6f8fb5d370fa23df31de24ae7efb75d6f3 | 157b9f6f8fb5d370fa23df31de24ae7efb75d6f3_0 | Q: How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?
Text: Introduction
With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, for cases like targeted advertising and other use cases, such as security, but it is also interesting from a linguistic standpoint.
In the shared task on author profiling BIBREF0 , organised within the PAN framework BIBREF1 , the aim is to infer Twitter users' gender and language variety from their tweets in four different languages: English, Spanish, Arabic, and Portuguese. Gender consists of a binary classification (male/female), whereas language variety differs per language, from 2 varieties for Portuguese (Brazilian and Portugal) to 7 varieties for Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela). The challenge is thus to classify users along two very different axes, and in four highly different languages – forcing participants to either build models that can capture these traits very generally (language-independent) or tailor-make models for each language or subtask.
Even when looking at the two tasks separately, it looks like the very same features could be reliable clues for classification. Indeed, for both profiling authors on Twitter as well as for discriminating between similar languages, word and character n-grams have proved to be the strongest predictors of gender as well as language varieties. For language varieties discrimination, the systems that performed best at the DSL shared tasks in 2016 (on test set B, i.e. social media) used word/character n-grams, independently of the algorithm BIBREF2 . The crucial contribution of these features was also observed by BIBREF3 , BIBREF4 , who participated in the 2017 DSL shared task with the two best performing systems. For author profiling, it has been shown that tf-idf weighted n-gram features, both in terms of characters and words, are very successful in capturing especially gender distinctions BIBREF5 . If different aspects such as language variety and gender of a speaker on Twitter might be captured by the same features, can we build a single model that will characterise both aspects at once?
In the context of the PAN 2017 competition on user profiling we therefore experimented with enriching a basic character and word n-gram model by including a variety of features that we believed should work. We also tried to view the task jointly and model the two problems as one single label, but single modelling worked best.
In this paper we report how our final submitted system works, and provide some general data analysis, but we also devote substantial space to describing what we tried (under which motivations), as we believe this is very informative towards future developments of author profiling systems.
Final System
After an extensive grid search we submitted as our final run a simple SVM system (using the scikit-learn LinearSVM implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting and sublinear term frequency scaling, where instead of the standard term frequency the following is used:
$1 + \log (\mathrm{tf})$
We ran the grid search over both tasks and all languages on a 64-core machine with 1 TB RAM (see Table TABREF2 for the list of values over which the grid search was performed). The full search took about a day to complete. In particular, using min_df=2 (i.e. excluding all terms that are used by only one author) seems to have a strong positive effect and greatly reduces the feature size as there are many words that appear only once. The different optimal parameters for different languages provided only a slight performance boost for each language. We judged this increase too small to be significant, so we decided to use a single parameter set for all languages and both tasks.
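A sketch of this configuration in scikit-learn is given below; the n-gram ranges, sublinear tf, and min_df=2 come from the description above, while everything else (including the assumption that each author's tweets are concatenated into a single document) is illustrative.

```python
# Sketch of the submitted configuration: char 3-5-gram and word 1-2-gram tf-idf features
# (sublinear tf, min_df=2) feeding a linear SVM. Anything not stated in the text above
# (e.g. concatenating each author's tweets into one document) is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

features = FeatureUnion([
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(3, 5),
                             sublinear_tf=True, min_df=2)),
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2),
                             sublinear_tf=True, min_df=2)),
])
model = Pipeline([("features", features), ("svm", LinearSVC())])

# model.fit(train_docs, train_labels)      # one concatenated document per author
# predictions = model.predict(test_docs)
```

Note that sublinear_tf=True in TfidfVectorizer implements exactly the replacement of the raw term frequency by 1 + log(tf) given above.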
Data Analysis
The training dataset provided consists of 11400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variation task and also shows the data distribution across languages.
We produced two visualisations, one per label (i.e. variety and gender), in order to gain some insights that could help the feature engineering process. For the variety label we trained a decision tree classifier using word unigrams: although the performance is poor (accuracy score of 0.63) this setup has the benefit of being easy to interpret: Figure FIGREF3 shows which features are used for the first splits of the tree.
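The probe can be reproduced along the following lines; the two toy author documents and the hyper-parameters are illustrative stand-ins.

```python
# Sketch of the unigram decision-tree probe used for inspection; the toy author
# documents and all hyper-parameters are illustrative stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

train_docs = ["vamos al zocalo hoy", "che vamos a la cancha"]   # toy author documents
train_variety = ["mexico", "argentina"]

vec = CountVectorizer()
X = vec.fit_transform(train_docs)
tree = DecisionTreeClassifier(random_state=0).fit(X, train_variety)
print(export_text(tree, feature_names=list(vec.get_feature_names_out())))
```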
We also created a visualisation of the English dataset using the tool described in BIBREF6 , and comparing the most frequent words used by males to those used by females. The visualisation shown in Figure SECREF6 indicates several interesting things about the gendered use of language. The words used often by males and very seldom by females are often sport-related, and include words such as “league”, and “chelsea”. There are several emojis that are used frequently by females and infrequently by males, e.g. “”, “”, as well as words like “kitten”, “mom”, “sister” and “chocolate”. In the top right of the visualisation we see words like “trump” and “sleep”, which indicates that these words are used very frequently, but equally so by both genders. This also shows that distinguishing words include both time-specific ones, like “gilmore” and “imacelebrityau”, and general words from everyday life, which are less likely to be subject to time-specific trends, like “player”, and “chocolate”.
Alternative Features and Methods: An Analysis of Negative Results
This section is meant to highlight all of the potential contributions to the systems which turned out to be detrimental to performance, when compared to the simpler system that we have described in Section SECREF2 . We divide our attempts according to the different ways we attempted to enhance performance: manipulating the data itself (adding more, and changing preprocessing), using a large variety of features, and changing strategies in modelling the problem by using different algorithms and paradigms. All reported results are on the PAN 2017 training data using five-fold cross-validation, unless otherwise specified.
Supplementary Data and Features
We extended the training dataset by adding data and gender labels from the PAN 16 Author Profiling shared task BIBREF5 . However, the additional data consistently resulted in lower cross-validation scores than when using only the training data provided with the PAN 17 task. One possible explanation for this is that our unigram model captures aspects that are tied specifically to the PAN 17 dataset, because it contains topics that may not be present in datasets that were collected in a different time period. To confirm this, we attempted to train on English data from PAN 17 and predict gender labels for the English data from PAN 16, as well as vice versa. Training on the PAN 16 data resulted in an accuracy score of 0.754 for the PAN 17 task, and training on PAN 17 gave an accuracy score of 0.70 for PAN 16, both scores significantly lower than cross-validated results on data from a single year.
We attempted to classify the English tweets by Gender using only the data collected by BIBREF7 . This dataset consists of aggregated word counts by gender for about 14,000 Twitter users and 9 million Tweets. We used this data to calculate whether each word in our dataset was a `male' word (used more by males), or a `female' word, and classified users as male or female based on a majority count of the words they used. Using this method we achieved 71.2 percent accuracy for the English gender data, showing that this simple method can provide a reasonable baseline to the gender task.
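The baseline amounts to a per-word majority vote, roughly as sketched below; the count table is a toy stand-in for the aggregated counts of the external dataset.

```python
# Sketch of the word-count gender baseline: label each word `male' or `female' by which
# gender uses it more in the external counts, then take a majority vote over the author's
# words. The counts below are toy stand-ins, not the real aggregated data.
COUNTS = {                       # word -> (male_count, female_count)
    "league": (900, 100),
    "chelsea": (800, 200),
    "kitten": (150, 850),
    "chocolate": (300, 700),
}

def predict_gender(text: str) -> str:
    male_votes = female_votes = 0
    for token in text.lower().split():
        if token in COUNTS:
            m, f = COUNTS[token]
            male_votes += m > f
            female_votes += f > m
    return "male" if male_votes >= female_votes else "female"  # ties broken arbitrarily

print(predict_gender("watching the league with chelsea fans"))   # -> "male"
```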
We experimented with different tokenization techniques for different languages, but our average results did not improve, so we decided to use the default scikit-learn tokenizer.
We tried adding POS-tags to the English tweets using the spaCy tagger: compared to the model using unigrams only, performance dropped slightly for gender and a bit more for variety:
It is not clear whether the lack of improvement is due to the fact that the data are not normalised (i.e. the tokenizer is not Twitter-specific) or to the fact that POS tags confuse the classifier. Considering the results we decided not to include a POS-tagger in the final system.
In April 2015, SwiftKey did an extensive report on emoji use by country. They discovered that emoji use varies across languages and across language varieties. For example, they found that Australians use double the average amount of alcohol-themed emoji and use more junk food and holiday emoji than anywhere else in the world.
We tried to leverage these findings, but the results were disappointing. We used a list of emojis as a vocabulary for the tf-idf vectorizer. Encouraged by the results of the SwiftKey report, we first tried to use emojis as the only vocabulary and although the results are above the baseline and also quite high considering the type of features, they were still below the simple unigram model. Adding emojis as extra features to the unigram model also did not provide any improvement.
Since emojis are used across languages we built a single model for the four languages. We trained the model for the gender label on English, Portuguese and Arabic and tested it on Spanish: the system scored 0.67 in accuracy.
We looked at accuracy scores for the English gender and variety data more closely. We tried different representations of the tweet texts, to see what kind of words were most predictive of variety and gender. Specifically, we look at using only words that start with an uppercase letter, only words that start with a lowercase letter, only Twitter handles (words that start with an "@") and all the text excluding the handles.
It is interesting that the accuracies are so high although we are using only a basic unigram model, without looking at the character n-grams that we include in our final model. Representing each text only by the Twitter handles used in that text results in 0.77 accuracy for variety, probably because users tend to interact with other users who are in the same geographic area. However, excluding handles from the texts barely decreases performance for the variety task, showing that while the handles can be discriminative, they are not necessary for this task. It is also interesting to note that for this dataset, looking only at words beginning with an uppercase character results in nearly the same score for the Gender task as we get when using all of the available text, while using only lowercase words decreases performance. The opposite is true for the variety task, where using lowercase-only words results in as good performance as using all the text, but using only uppercase words decreases accuracy by over 10 percent.
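The four representations amount to simple token filters, for example (whitespace tokenisation is a deliberate simplification here):

```python
# Sketch of the four text views compared above; whitespace tokenisation stands in for
# whatever tokeniser was actually used.
def text_views(text: str):
    tokens = text.split()
    return {
        "uppercase_only": [t for t in tokens if t[:1].isupper()],
        "lowercase_only": [t for t in tokens if t[:1].islower()],
        "handles_only":   [t for t in tokens if t.startswith("@")],
        "no_handles":     [t for t in tokens if not t.startswith("@")],
    }

print(text_views("@pbsnewshour Trump slept through the Gilmore reunion"))
```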
We tried using the counts of geographical names related to the language varieties as features. We also treated this list of locations as vocabulary for our model. Neither of these approaches improved our model.
We then tried enriching the data to improve the Unigram model. For each of the language varieties, we obtained 100 geographical location names, representing the cities with the most inhabitants. When this location was mentioned in the tweet, the language variety the location was part of was added to the tweet.
We attempted to use Twitter handles in a similar manner. The 100 most-followed Twitter users per language variety were found and the language variety was added to the text when one of its popular Twitter users was mentioned.
Unfortunately, this method did not improve our model. We suspect that the information is being captured by the n-gram model, which could explain why this did not improve performance.
We tried a partial setup of last year's winning system, GronUP BIBREF8 , with the distinction that we had to classify language variety instead of age groups. We excluded the features that are language-dependent (i.e. pos-tagging and misspelling/typos), and experimented with various feature combinations of the rest while keeping word and character n-grams the same. We achieved average accuracy from 0.810 to 0.830, which is clearly lower than our simple final model.
Modelling
We tried to build a single model that predicts at the same time both the language variety and the gender of each user: as expected (since the task is harder) the performance goes down when compared to a model trained independently on each label. However, as highlighted in Table TABREF21 , the results are still surprisingly high. To train the system we simply merged the two labels.
We experimented with Facebook's FastText system, which is an out-of-the-box supervised learning classifier BIBREF9 . We used only the data for the English gender task, trying both tweet-level and author-level classification. We pre-processed all text with the NLTK Tweet Tokenizer and used the classification-example script provided with the FastText code base. Training on 3,000 authors and testing on 600 authors gave an accuracy score of 0.64. Changing the FastText parameters such as number of epochs, word n-grams, and learning rate showed no improvement. We achieved an accuracy of 0.79 when we attempted to classify on a per-tweet basis (300,000 tweets for training and 85,071 for test), but this is an easier task as some authors are split over the training and test sets. There are various ways to summarise per-tweet predictions into author-predictions, but we did not experiment further as it seemed that the SVM system worked better for the amount of data we have.
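For reference, the same kind of experiment can be run with the fastText Python bindings roughly as follows; the file names, the label format, and the majority-vote aggregation at the end are our assumptions rather than the exact script that was used.

```python
# Sketch of the fastText experiment via the Python bindings (the paper used the
# classification example script). File names, the __label__ format of train.txt/test.txt,
# and the per-author majority vote are assumptions.
import collections
import fasttext

# train.txt / test.txt: one tweet per line, e.g. "__label__female just adopted a kitten"
model = fasttext.train_supervised(input="train.txt", epoch=5, lr=0.1, wordNgrams=2)
print(model.test("test.txt"))            # (number of examples, precision@1, recall@1)

def author_label(tweets):
    """Summarise per-tweet predictions into one per-author prediction by majority vote."""
    votes = collections.Counter(model.predict(tweet)[0][0] for tweet in tweets)
    return votes.most_common(1)[0][0]
```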
In the final system we used the SVM classifier because it outperformed all the others that we tried. Table TABREF23 highlights the results.
Results on Test Data
For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender and 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd, shows the difference between N-GrAM's score and that achieved by the second-highest ranked system (excluding the baseline).
Results are broken down per language, and are summarised as both joint and average scores. The joint score is the percentage of texts for which both gender and variety were predicted correctly at the same time. The average is calculated as the mean over all languages.
N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task.
Conclusion
We conclude that, for the current author profiling task, a seemingly simple system using word and character n-grams and an SVM classifier proves very hard to beat. Indeed, N-GrAM turned out to be the best-performing out of the 22 systems submitted in this shared task. Using additional training data, `smart' features, and hand-crafted resources hurts rather than helps performance. A possible lesson to take from this would be that manually crafting features serves only to hinder a machine learning algorithm's ability to find patterns in a dataset, and perhaps it is better to focus one's efforts on parameter optimisation instead of feature engineering.
However, we believe that this is too strong a conclusion to draw from this limited study, since several factors specific to this setting need to be taken into account. For one, a support vector machine clearly outperforms other classifiers, but this does not mean that it is an inherently more powerful classifier. Rather, we expect that an SVM is the best choice for the given amount of training data, but with more training data, a neural network-based approach would achieve better results.
Regarding the frustrating lack of benefit from more advanced features than n-grams, a possible explanation comes from a closer inspection of the data. Both the decision tree model (see Figure FIGREF3 ) and the data visualisation (see Figure SECREF6 ) give us insight into the most discriminating features in the dataset. In the case of language variety, we see that place names can be informative features, and could therefore be used as a proxy for geographical location, which in turn serves as a proxy for language variety. Adding place names explicitly to our model did not yield performance improvements, which we take to indicate that this information is already captured by n-gram features. Whether and how geographical information in the text can be useful in identifying language variety is a matter for future research.
In the case of gender, many useful features are ones that are highly specific to the Twitter platform (#iconnecthearts), time (cruz), and topics (pbsnewshour) in this dataset, which we suspect would not carry over well to other datasets, but provide high accuracy in this case. Conversely, features designed to capture gender in a more general sense do not yield any benefit over the more specific features, although they would likely be useful for a robust, cross-dataset system. These hypotheses could be assessed in the future by testing author profiling systems in a cross-platform, cross-time setting.
Scatter plot of terms commonly used by male and female English speakers. | They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline |
9bcc1df7ad103c7a21d69761c452ad3cd2951bda | 9bcc1df7ad103c7a21d69761c452ad3cd2951bda_0 | Q: On which task does the model do worst?
Text: Introduction
With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, for cases like targeted advertising and other use cases, such as security, but it is also interesting from a linguistic standpoint.
In the shared task on author profiling BIBREF0 , organised within the PAN framework BIBREF1 , the aim is to infer Twitter users' gender and language variety from their tweets in four different languages: English, Spanish, Arabic, and Portuguese. Gender consists of a binary classification (male/female), whereas language variety differs per language, from 2 varieties for Portuguese (Brazilian and Portugal) to 7 varieties for Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela). The challenge is thus to classify users along two very different axes, and in four highly different languages – forcing participants to either build models that can capture these traits very generally (language-independent) or tailor-make models for each language or subtask.
Even when looking at the two tasks separately, it looks like the very same features could be reliable clues for classification. Indeed, for both profiling authors on Twitter as well as for discriminating between similar languages, word and character n-grams have proved to be the strongest predictors of gender as well as language varieties. For language varieties discrimination, the systems that performed best at the DSL shared tasks in 2016 (on test set B, i.e. social media) used word/character n-grams, independently of the algorithm BIBREF2 . The crucial contribution of these features was also observed by BIBREF3 , BIBREF4 , who participated in the 2017 DSL shared task with the two best performing systems. For author profiling, it has been shown that tf-idf weighted n-gram features, both in terms of characters and words, are very successful in capturing especially gender distinctions BIBREF5 . If different aspects such as language variety and gender of a speaker on Twitter might be captured by the same features, can we build a single model that will characterise both aspects at once?
In the context of the PAN 2017 competition on user profiling we therefore experimented with enriching a basic character and word n-gram model by including a variety of features that we believed should work. We also tried to view the task jointly and model the two problems as one single label, but single modelling worked best.
In this paper we report how our final submitted system works, and provide some general data analysis, but we also devote substantial space to describing what we tried (under which motivations), as we believe this is very informative towards future developments of author profiling systems.
Final System
After an extensive grid search we submitted as our final run a simple SVM system (using the scikit-learn LinearSVM implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting and sublinear term frequency scaling, where instead of the standard term frequency the following is used:
$1 + \log (\mathrm{tf})$
We ran the grid search over both tasks and all languages on a 64-core machine with 1 TB RAM (see Table TABREF2 for the list of values over which the grid search was performed). The full search took about a day to complete. In particular, using min_df=2 (i.e. excluding all terms that are used by only one author) seems to have a strong positive effect and greatly reduces the feature size as there are many words that appear only once. The different optimal parameters for different languages provided only a slight performance boost for each language. We judged this increase too small to be significant, so we decided to use a single parameter set for all languages and both tasks.
Data Analysis
The training dataset provided consists of 11400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variation task and also shows the data distribution across languages.
We produced two visualisations, one per label (i.e. variety and gender), in order to gain some insights that could help the feature engineering process. For the variety label we trained a decision tree classifier using word unigrams: although the performance is poor (accuracy score of 0.63) this setup has the benefit of being easy to interpret: Figure FIGREF3 shows which features are used for the first splits of the tree.
We also created a visualisation of the English dataset using the tool described in BIBREF6 , and comparing the most frequent words used by males to those used by females. The visualisation shown in Figure SECREF6 indicates several interesting things about the gendered use of language. The words used often by males and very seldom by females are often sport-related, and include words such as “league”, and “chelsea”. There are several emojis that are used frequently by females and infrequently by males, e.g. “”, “”, as well as words like “kitten”, “mom”, “sister” and “chocolate”. In the top right of the visualisation we see words like “trump” and “sleep”, which indicates that these words are used very frequently, but equally so by both genders. This also shows that distinguishing words include both time-specific ones, like “gilmore” and “imacelebrityau”, and general words from everyday life, which are less likely to be subject to time-specific trends, like “player”, and “chocolate”.
Alternative Features and Methods: An Analysis of Negative Results
This section is meant to highlight all of the potential contributions to the systems which turned out to be detrimental to performance, when compared to the simpler system that we have described in Section SECREF2 . We divide our attempts according to the different ways we attempted to enhance performance: manipulating the data itself (adding more, and changing preprocessing), using a large variety of features, and changing strategies in modelling the problem by using different algorithms and paradigms. All reported results are on the PAN 2017 training data using five-fold cross-validation, unless otherwise specified.
Supplementary Data and Features
We extended the training dataset by adding data and gender labels from the PAN 16 Author Profiling shared task BIBREF5 . However, the additional data consistently resulted in lower cross-validation scores than when using only the training data provided with the PAN 17 task. One possible explanation for this is that our unigram model captures aspects that are tied specifically to the PAN 17 dataset, because it contains topics that may not be present in datasets that were collected in a different time period. To confirm this, we attempted to train on English data from PAN 17 and predict gender labels for the English data from PAN 16, as well as vice versa. Training on the PAN 16 data resulted in an accuracy score of 0.754 for the PAN 17 task, and training on PAN 17 gave an accuracy score of 0.70 for PAN 16, both scores significantly lower than cross-validated results on data from a single year.
We attempted to classify the English tweets by Gender using only the data collected by BIBREF7 . This dataset consists of aggregated word counts by gender for about 14,000 Twitter users and 9 million Tweets. We used this data to calculate whether each word in our dataset was a `male' word (used more by males), or a `female' word, and classified users as male or female based on a majority count of the words they used. Using this method we achieved 71.2 percent accuracy for the English gender data, showing that this simple method can provide a reasonable baseline to the gender task.
We experimented with different tokenization techniques for different languages, but our average results did not improve, so we decided to use the default scikit-learn tokenizer.
We tried adding POS-tags to the English tweets using the spaCy tagger: compared to the model using unigrams only, performance dropped slightly for gender and a bit more for variety:
It is not clear whether the lack of improvement is due to the fact that the data are not normalised (i.e. the tokenizer is not Twitter-specific) or to the fact that POS tags confuse the classifier. Considering the results we decided not to include a POS-tagger in the final system.
In April 2015, SwiftKey did an extensive report on emoji use by country. They discovered that emoji use varies across languages and across language varieties. For example, they found that Australians use double the average amount of alcohol-themed emoji and use more junk food and holiday emoji than anywhere else in the world.
We tried to leverage these findings, but the results were disappointing. We used a list of emojis as a vocabulary for the tf-idf vectorizer. Encouraged by the results of the SwiftKey report, we first tried to use emojis as the only vocabulary and although the results are above the baseline and also quite high considering the type of features, they were still below the simple unigram model. Adding emojis as extra features to the unigram model also did not provide any improvement.
Since emojis are used across languages we built a single model for the four languages. We trained the model for the gender label on English, Portuguese and Arabic and tested it on Spanish: the system scored 0.67 in accuracy.
We looked at accuracy scores for the English gender and variety data more closely. We tried different representations of the tweet texts, to see what kind of words were most predictive of variety and gender. Specifically, we look at using only words that start with an uppercase letter, only words that start with a lowercase letter, only Twitter handles (words that start with an "@") and all the text excluding the handles.
It is interesting that the accuracies are so high although we are using only a basic unigram model, without looking at the character n-grams that we include in our final model. Representing each text only by the Twitter handles used in that text results in 0.77 accuracy for variety, probably because users tend to interact with other users who are in the same geographic area. However, excluding handles from the texts barely decreases performance for the variety task, showing that while the handles can be discriminative, they are not necessary for this task. It is also interesting to note that for this dataset, looking only at words beginning with an uppercase character results in nearly the same score for the Gender task as we get when using all of the available text, while using only lowercase words decreases performance. The opposite is true for the variety task, where using lowercase-only words results in as good performance as using all the text, but using only uppercase words decreases accuracy by over 10 percent.
We tried using the counts of geographical names related to the language varieties as features. We also treated this list of locations as vocabulary for our model. Neither of these approaches improved our model.
We then tried enriching the data to improve the Unigram model. For each of the language varieties, we obtained 100 geographical location names, representing the cities with the most inhabitants. When this location was mentioned in the tweet, the language variety the location was part of was added to the tweet.
We attempted to use Twitter handles in a similar manner. The 100 most-followed Twitter users per language variety were found and the language variety was added to the text when one of its popular Twitter users was mentioned.
Unfortunately, this method did not improve our model either. We suspect that the information is already captured by the n-gram model, which could explain why it did not improve performance.
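A sketch of the place-name enrichment described above, with an invented city list standing in for the real 100-cities-per-variety lists:

```python
CITIES = {  # illustrative subset only
    "sydney": "australia", "melbourne": "australia",
    "dublin": "ireland", "toronto": "canada",
}

def enrich_with_variety(text):
    """Append the variety label of any listed city mentioned in the tweet."""
    tokens = text.lower().split()
    varieties = {CITIES[t] for t in tokens if t in CITIES}
    return (text + " " + " ".join(sorted(varieties))) if varieties else text

print(enrich_with_variety("Great gig in Dublin last night"))
# Great gig in Dublin last night ireland
```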
We also tried a partial setup of last year's winning system, GronUP BIBREF8, with the distinction that we had to classify language variety instead of age groups. We excluded the language-dependent features (i.e. POS tagging and misspellings/typos), and experimented with various combinations of the remaining features while keeping the word and character n-grams the same. We achieved average accuracies from 0.810 to 0.830, which is clearly lower than our simpler final model.
Modelling
We tried to build a single model that predicts both the language variety and the gender of each user at the same time: as expected (since the task is harder), performance goes down compared to a model trained independently on each label. However, as highlighted in Table TABREF21, the results are still surprisingly high. To train the system we simply merged the two labels, as sketched below.
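A minimal sketch of the label merging (the column names and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "gender":  ["male", "female", "female"],
    "variety": ["canada", "australia", "ireland"],
})

# Merge the two labels into a single target, e.g. "female_australia",
# and train one classifier on the combined label.
df["joint"] = df["gender"] + "_" + df["variety"]
print(df["joint"].tolist())
# ['male_canada', 'female_australia', 'female_ireland']
```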
We experimented with Facebook's FastText system, which is an out-of-the-box supervised learning classifier BIBREF9. We used only the data for the English gender task, trying both tweet-level and author-level classification. We pre-processed all text with the NLTK Tweet Tokenizer and used the classification-example script provided with the FastText code base. Training on 3,000 authors and testing on 600 authors gave an accuracy score of 0.64. Changing the FastText parameters such as the number of epochs, word n-grams, and learning rate showed no improvement. We achieved an accuracy of 0.79 when we attempted to classify on a per-tweet basis (300,000 tweets for training and 85,071 for test), but this is an easier task as some authors are split over the training and test sets. There are various ways to summarise per-tweet predictions into author predictions, but we did not experiment further as it seemed that the SVM system worked better for the amount of data we have.
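A minimal sketch of this setup through the fasttext Python bindings rather than the classification-example script (the training file name and parameter values are placeholders, not the actual configuration):

```python
import fasttext

# Training file: one author per line, label first, e.g.
#   __label__female i love my kitten and some chocolate ...
model = fasttext.train_supervised(input="en_gender_train.txt",
                                  epoch=25, lr=0.5, wordNgrams=2)

labels, probs = model.predict("off to watch the league with the lads")
print(labels, probs)
```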
In the final system we used the SVM classifier because it outperformed all the others that we tried. Table TABREF23 highlights the results.
Results on Test Data
For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender and 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24. We compare our gender and variety accuracies against the LDR-baseline BIBREF10, a low-dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, +2nd, shows the difference between N-GrAM's score and that achieved by the second-highest ranked system (excluding the baseline).
Results are broken down per language, and are summarised as both joint and average scores. The joint score is the percentage of texts for which both gender and variety were predicted correctly at the same time. The average is calculated as the mean over all languages.
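For concreteness, the two summary scores reduce to the following (a sketch, not the official evaluation script):

```python
def joint_score(gold_gender, gold_variety, pred_gender, pred_variety):
    """Fraction of authors with BOTH labels predicted correctly."""
    hits = sum(g == pg and v == pv
               for g, v, pg, pv in zip(gold_gender, gold_variety,
                                       pred_gender, pred_variety))
    return hits / len(gold_gender)

def average_score(per_language_scores):
    """Mean over the per-language scores."""
    return sum(per_language_scores) / len(per_language_scores)
```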
N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task.
Conclusion
We conclude that, for the current author profiling task, a seemingly simple system using word and character n-grams and an SVM classifier proves very hard to beat. Indeed, N-GrAM turned out to be the best-performing out of the 22 systems submitted in this shared task. Using additional training data, `smart' features, and hand-crafted resources hurts rather than helps performance. A possible lesson to take from this would be that manually crafting features serves only to hinder a machine learning algorithm's ability to find patterns in a dataset, and perhaps it is better to focus one's efforts on parameter optimisation instead of feature engineering.
However, we believe that this is too strong a conclusion to draw from this limited study, since several factors specific to this setting need to be taken into account. For one, a support vector machine clearly outperforms the other classifiers, but this does not mean that it is an inherently more powerful model. Rather, we expect that an SVM is the best choice for the given amount of training data, but that with more training data, a neural network-based approach would achieve better results.
Regarding the frustrating lack of benefit from more advanced features than n-grams, a possible explanation comes from a closer inspection of the data. Both the decision tree model (see Figure FIGREF3) and the data visualisation (see Figure SECREF6) give us an insight into the most discriminating features in the dataset. In the case of language variety, we see that place names can be informative features, and could therefore be used as a proxy for geographical location, which in turn serves as a proxy for language variety. Adding place names explicitly to our model did not yield performance improvements, which we take to indicate that this information is already captured by n-gram features. Whether and how geographical information in the text can be useful in identifying language variety is a matter for future research.
In the case of gender, many useful features are ones that are highly specific to the Twitter platform (#iconnecthearts), time (cruz), and topics (pbsnewshour) in this dataset, which we suspect would not carry over well to other datasets, but provide high accuracy in this case. Conversely, features designed to capture gender in a more general sense do not yield any benefit over the more specific features, although they would likely be useful for a robust, cross-dataset system. These hypotheses could be assessed in the future by testing author profiling systems in a cross-platform, cross-time setting.
Scatter plot of terms commonly used by male and female English speakers. | Gender prediction task |
8427988488b5ecdbe4b57b3813b3f981b07f53a5 8427988488b5ecdbe4b57b3813b3f981b07f53a5_0 | Q: On which task does the model do best?
Text: Introduction
With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, and location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, from targeted advertising to security, but it is also interesting from a linguistic standpoint.
In the shared task on author profiling BIBREF0 , organised within the PAN framework BIBREF1 , the aim is to infer Twitter users' gender and language variety from their tweets in four different languages: English, Spanish, Arabic, and Portuguese. Gender consists of a binary classification (male/female), whereas language variety differs per language, from 2 varieties for Portuguese (Brazilian and Portugal) to 7 varieties for Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela). The challenge is thus to classify users along two very different axes, and in four highly different languages – forcing participants to either build models that can capture these traits very generally (language-independent) or tailor-make models for each language or subtask.
Even when looking at the two tasks separately, it looks like the very same features could be reliable clues for classification. Indeed, for both profiling authors on Twitter as well as for discriminating between similar languages, word and character n-grams have proved to be the strongest predictors of gender as well as language varieties. For language varieties discrimination, the systems that performed best at the DSL shared tasks in 2016 (on test set B, i.e. social media) used word/character n-grams, independently of the algorithm BIBREF2 . The crucial contribution of these features was also observed by BIBREF3 , BIBREF4 , who participated in the 2017 DSL shared task with the two best performing systems. For author profiling, it has been shown that tf-idf weighted n-gram features, both in terms of characters and words, are very successful in capturing especially gender distinctions BIBREF5 . If different aspects such as language variety and gender of a speaker on Twitter might be captured by the same features, can we build a single model that will characterise both aspects at once?
In the context of the PAN 2017 competition on user profiling we therefore experimented with enriching a basic character and word n-gram model by including a variety of features that we believed should work. We also tried to view the task jointly and model the two problems as one single label, but modelling each label separately worked best.
In this paper we report how our final submitted system works, and provide some general data analysis, but we also devote substantial space to describing what we tried (under which motivations), as we believe this is very informative towards future developments of author profiling systems.
Final System
After an extensive grid search, we submitted as our final run a simple SVM system (using the scikit-learn LinearSVC implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting and sublinear term frequency scaling, where instead of the standard term frequency the following is used:
$\mathrm{tf}' = 1 + \log(\mathrm{tf})$
We ran the grid search over both tasks and all languages on a 64-core machine with 1 TB RAM (see Table TABREF2 for the list of values over which the grid search was performed). The full search took about a day to complete. In particular, using min_df=2 (i.e. excluding all terms that are used by only one author) seems to have a strong positive effect and greatly reduces the feature size as there are many words that appear only once. The different optimal parameters for different languages provided only a slight performance boost for each language. We decided that this increase was too small to be significant, so we decided to use a single parameter set for all languages and both tasks.
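A sketch of the final configuration and the kind of grid search described above (the parameter grid shown here is abridged and illustrative, and combining the two vectorizers in a single FeatureUnion is our implementation choice; the full list of searched values is in Table TABREF2):

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2),
                             sublinear_tf=True, min_df=2)),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(3, 5),
                             sublinear_tf=True, min_df=2)),
])
clf = Pipeline([("features", features), ("svm", LinearSVC())])

# Abridged, illustrative grid; the submitted system used one parameter set
# for all languages and both tasks.
param_grid = {
    "features__word__min_df": [1, 2],
    "features__char__ngram_range": [(2, 4), (3, 5)],
    "svm__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(clf, param_grid, cv=5, n_jobs=-1)
# search.fit(train_texts, train_labels)   # one run per language and task
```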
Data Analysis
The training dataset provided consists of 11,400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variety task and also shows the data distribution across languages.
We produced two visualisations, one per label (i.e. variety and gender), in order to gain some insights that could help the feature engineering process. For the variety label we trained a decision tree classifier using word unigrams: although the performance is poor (accuracy score of 0.63) this setup has the benefit of being easy to interpret: Figure FIGREF3 shows which features are used for the first splits of the tree.
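A sketch of this interpretable setup, with toy tweets standing in for the training data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

texts = ["the craic was great in dublin", "footy and a beer in melbourne",
         "maple syrup on everything in toronto"]
labels = ["ireland", "australia", "canada"]

vec = CountVectorizer()            # word unigrams
X = vec.fit_transform(texts)
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)

# The learned splits show which unigrams the tree finds most discriminative.
print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))
```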
We also created a visualisation of the English dataset using the tool described in BIBREF6, comparing the most frequent words used by males to those used by females. The visualisation shown in Figure SECREF6 indicates several interesting things about the gendered use of language. The words used often by males and very seldom by females are often sport-related, and include words such as “league” and “chelsea”. There are several emojis that are used frequently by females and infrequently by males, as well as words like “kitten”, “mom”, “sister” and “chocolate”. In the top right of the visualisation we see words like “trump” and “sleep”, which indicates that these words are used very frequently, but equally so by both genders. This also shows that distinguishing words include both time-specific ones, like “gilmore” and “imacelebrityau”, and general words from everyday life, which are less likely to be subject to time-specific trends, like “player” and “chocolate”.
Alternative Features and Methods: An Analysis of Negative Results
This section highlights all of the potential additions to the system which turned out to be detrimental to performance when compared to the simpler system described in Section SECREF2. We divide our attempts according to the different ways we tried to enhance performance: manipulating the data itself (adding more, and changing preprocessing), using a large variety of features, and changing strategies in modelling the problem by using different algorithms and paradigms. All reported results are on the PAN 2017 training data using five-fold cross-validation, unless otherwise specified.
Supplementary Data and Features
We extended the training dataset by adding data and gender labels from the PAN 16 Author Profiling shared task BIBREF5 . However, the additional data consistently resulted in lower cross-validation scores than when using only the training data provided with the PAN 17 task. One possible explanation for this is that our unigram model captures aspects that are tied specifically to the PAN 17 dataset, because it contains topics that may not be present in datasets that were collected in a different time period. To confirm this, we attempted to train on English data from PAN 17 and predict gender labels for the English data from PAN 16, as well as vice versa. Training on the PAN 16 data resulted in an accuracy score of 0.754 for the PAN 17 task, and training on PAN 17 gave an accuracy score of 0.70 for PAN 16, both scores significantly lower than cross-validated results on data from a single year.
We attempted to classify the English tweets by Gender using only the data collected by BIBREF7 . This dataset consists of aggregated word counts by gender for about 14,000 Twitter users and 9 million Tweets. We used this data to calculate whether each word in our dataset was a `male' word (used more by males), or a `female' word, and classified users as male or female based on a majority count of the words they used. Using this method we achieved 71.2 percent accuracy for the English gender data, showing that this simple method can provide a reasonable baseline to the gender task.
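A sketch of this majority-count baseline; the per-word counts below are invented, whereas the real ones come from the aggregated counts of BIBREF7:

```python
# word -> (count in male tweets, count in female tweets); invented numbers
GENDER_COUNTS = {"league": (900, 150), "chelsea": (700, 120),
                 "kitten": (80, 560), "chocolate": (210, 640)}

def predict_gender(text):
    male = female = 0
    for w in text.lower().split():
        m, f = GENDER_COUNTS.get(w, (0, 0))
        if m > f:
            male += 1
        elif f > m:
            female += 1
    return "male" if male >= female else "female"

print(predict_gender("watching the league tonight"))   # male
```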
We experimented with different tokenization techniques for different languages, but our average results did not improve, so we decided to use the default scikit-learn tokenizer.
| Variety prediction task
3604c4fba0a82d7139efd5ced47612c90bd10601 | 3604c4fba0a82d7139efd5ced47612c90bd10601_0 | Q: Is their implementation on CNN-DSA compared to GPU implementation in terms of power consumption, accuracy and speed?
Text: Introduction
The need to classify sentiment based on multi-modal input arises in many different problems in customer-related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; it then feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Characters method is superior to other existing methods. The Super Characters method also shows that models pretrained on a larger dataset help improve accuracy when fine-tuning the CNN model on a smaller dataset. Compared with a from-scratch trained Super Characters model, the fine-tuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method to Latin languages. With the wide availability of low-power CNN accelerator chips BIBREF2 BIBREF3, the Super Characters method has great potential to be deployed at large scale, thanks to its low power consumption and fast inference speed. In addition, it is easy to deploy. Recent work also extends its applications to chatbots BIBREF4, image captioning BIBREF5, and tabular data machine learning BIBREF6.
The CL-AFF Shared Task BIBREF7 is part of the Affective Content Analysis workshop at AAAI 2020. It builds upon the OffMyChest dataset BIBREF8, which contains 12,860 samples of training data and 5,000 samples of testing data. Each sample is a multi-modal input containing both text and tabular data. The text input is an English sentence from Reddit. The tabular data is the corresponding log information for each sentence, such as wordcount, created UTC time, and so on. Each sample has six binary classification labels: EmotionDisclosure?(Yes|No), InformationDisclosure?(Yes|No), Support?(Yes|No), EmmotionSupport?(Yes|No), InformationSupport?(Yes|No), and GeneralSupport?(Yes|No). In this paper, we apply Super Characters to this data set to classify the multi-modal input.
Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution
For multi-modal sentiment analysis, we can simply split the image into two parts, one for the text input and the other for the tabular data, so that both can be embedded into the Super Characters image. The CNN accelerator chip comes together with a Model Development Kit (MDK) for CNN model training, which feeds the two-dimensional Super Characters images into the MDK to obtain a fixed-point model. The Software Development Kit (SDK) is then used to load the model into the chip and send commands to the CNN accelerator chip, such as to read an image, or to forward-pass the image through the network to get the inference result. The advantage of using the CNN accelerator is low power: it consumes only 300 mW for an input of size 3x224x224 (RGB image) at a speed of 140 fps. Compared with other models using a GPU or FPGA, this solution implements the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Characters image. This has shown good results in system implementations for NLP applications BIBREF9.
Experiments ::: Data Exploration
The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. The other six columns are labels for each of the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes, so 60 models are trained for the 10-fold validation. The test data set has 5,000 samples with only the ten columns of attributes. The system runs give labels for these test samples based on the 10-fold training.
The training data contains 3,634 unique ids among its 12,860 samples, while the testing data contains only 2,443 unique ids among its 5,000 samples, meaning some of the records may come from the same discussion thread. There are 7,556 unique authors in the training data and 3,769 in the testing data, which means some authors are active enough to have published more than one comment.
Based on this, we considered including author names in the multi-modal model as well, since a comment may be biased by the personality of its author. The maximum length of an author's name is 20 characters, if SEW BIBREF1 is to be used to project the names onto a two-dimensional embedding. On the other hand, nchar, which indicates the number of characters in the full_text, has a maximum value of 9993, and the maximum wordcount is 481. The column "label" has 37 unique values, which are different combinations of strings like "husband", "wife", "boyfriend", "girlfriend", and their abbreviations like "bf" and "gf". The column "subreddit" is a categorical attribute with values in ("offmychest", "CasualConversation"). After converting the Unix time in the column "created_utc", we found that the records were generated from 2017 to 2018. The column "score" has integers ranging from -44 to 1838 with 251 unique values.
Experiments ::: Design SuperCharacters Image
The sentence length distribution is given in Figure FIGREF3. The layout design for the full_text is based on this. Since we present the English words using the SEW BIBREF1 method, the size of each English word on the Super Characters image is best calculated as (224/N)*(224/N) pixels if the whole image is set to 224x224, where N is an integer. The dimension is set to 224x224 because of the chip specification.
Experiments ::: Design SuperCharacters Image ::: Design Option One
In this design setting, we only include the full_text information and ignore the other attributes. If N=7, each row has 7 words, and each word occupies (224/7)*(224/7)=32*32 pixels. In this setting we can hold up to 49 words of full_text. For records with more than 49 words, the remaining words are ignored. In this case, only 0.86% of the training data and 1.98% of the testing data have their sentences cut at 49 words. An example of this design setting is shown in Figure FIGREF4.
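A rough sketch of the Design Option One layout using Pillow; the real system renders each word as a squared SEW glyph filling its cell, whereas here plain text is drawn into each 32x32 cell just to show the grid geometry:

```python
from PIL import Image, ImageDraw, ImageFont

IMG, N = 224, 7            # 224x224 image, 7 words per row (Design Option One)
CELL = IMG // N            # 32x32 pixels per word cell

def super_characters(text):
    words = text.split()[: N * N]                # cut the sentence at 49 words
    img = Image.new("L", (IMG, IMG), color=255)  # white background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()   # a real run would load a TrueType font
    for i, word in enumerate(words):
        row, col = divmod(i, N)
        # Draw the word inside its own CELL x CELL square; SEW instead renders
        # each word as a squared glyph that fills the whole cell.
        draw.text((col * CELL + 1, row * CELL + 1), word[:5], fill=0, font=font)
    return img

# super_characters("i am so proud of my wife ...").save("example.png")
```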
Experiments ::: Design SuperCharacters Image ::: Design Option Two
If N=8, each row has 8 words, and each word occupies (224/8)*(224/8)=28*28 pixels. If we set the cut length to 40, we have 5 rows for the full_text, and the other 3 rows are not used for text; instead, all the space of those 224*(3*28) pixels is used for the tabular data given in the attributes other than full_text. For records with more than 40 words, the remaining words are ignored. In this case, only 2.03% of the training data and 4.14% of the testing data have their sentences cut at 40 words. We have the option to use the bottom part of the image to embed the other attributes. The id and sentenceid should be unrelated to the prediction, so these two attributes are not included. One example with the full_text, author, wordcount, created_utc, subreddit, score, nchar, and label is given in Figure FIGREF4.
However, the 10-fold training accuracy of this design is not good. This is partially because some of the attributes do not contribute to prediction but add more noise instead. For example, the creation time may not be very related to the prediction tasks, yet it occupies a good portion of the embedding area of the image. In addition, since most of the wordcounts are centered around less than twenty, the two-dimensional embeddings of the full_text have better resolution if the cut length is smaller than 40, so the font size is larger and easier for the CNN to learn.
Experiments ::: Design SuperCharacters Image ::: Design Option Three
This design setting sets the cut length of the full_text sentence to 42, and leaves the space of the last row for some important attributes, including subreddit, wordcount, score, and label. An example of this design setting is in Figure FIGREF4.
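Continuing the layout sketch above and assuming the same 7-words-per-row grid (42 words fill 6 rows, leaving the bottom row free), the reserved row can be filled with the tabular attributes roughly like this (attribute values are illustrative):

```python
IMG, N = 224, 7          # 6 rows of 7 words for full_text, 1 row reserved
CELL = IMG // N

def draw_tabular_row(draw, font, attrs):
    """Draw tabular attributes (e.g. subreddit, wordcount, score, label) as
    text cells in the reserved bottom row of the Super Characters image."""
    for col, value in enumerate(attrs[:N]):
        draw.text((col * CELL + 1, (N - 1) * CELL + 1), str(value)[:5],
                  fill=0, font=font)

# draw_tabular_row(draw, font, ["offmychest", 17, 3, "wife"])
```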
Experiments ::: Design SuperCharacters Image ::: Design Option Four
This is data augmentation for Design Option Three. For a small data set, we need more data with the same semantic meaning, generated from the raw labeled data without adding any noise. In Super Characters, the text is projected into the image. Adding some spaces at the front should not change the semantic meaning, while increasing the number of generated Super Characters images. For each sentence, if the sentence length is less than 42, we add one space at the front and then generate the Super Characters image. This process iterates until the length of the sentence with the added spaces reaches 42. An example of this design setting is in Figure FIGREF4.
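The space-padding augmentation can be generated as follows (a sketch; the cut length of 42 matches Design Options Three and Four):

```python
CUT_LENGTH = 42

def augment(sentence):
    """Yield the original sentence plus space-padded copies, one extra leading
    space at a time, until the padded length reaches the cut length."""
    n_words = len(sentence.split())
    yield sentence
    for pad in range(1, max(0, CUT_LENGTH - n_words) + 1):
        yield " " * pad + sentence

print(sum(1 for _ in augment("i am so proud of my wife")))
# 36 variants (original + 35 space-padded copies)
```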
Experiments ::: Experimental Results
After comparison, only Design Option One and Design Option Four are kept for the entire 10-fold training and validation.
Submissions were limited to a maximum of 10 system runs. Therefore, only the first five 10-fold models of both Design Option One and Design Option Four were tested against the 5,000 testing samples and submitted. The details of these 10 system runs are given in Tables TABREF10-TABREF15.
In general, Design Option Four is a little better than Design Option One, but these results are still not good; they are only slightly better than constantly predicting one class. The results on this OffMyChest data are not as good as on the AffCon19 CL-AFF shared task, and compared with Super Characters on the Wikipedia data set, the accuracy on this data is also lower.
Several methods could be used to further improve the accuracy. First, a pretrained model may help: for this shared task, the number of training examples is relatively small for learning the complex definitions of these 6 tasks. Second, other data augmentation methods could be introduced to further boost accuracy, for example replacing words with their synonyms. Third, the data set is skewed, and could be balanced by upsampling.
Conclusion
In this paper, we proposed a modified version of Super Characters in order to make it work on multi-modal data. In the case of this AffCon CL-AFF shared task, the multi-modal data includes text data and tabular data. In addition, we deploy the models on low-power CNN chips, which demonstrates the feasibility of applying DNN models with consideration of real-world practical concerns such as power and speed. The Super Characters method is relatively new and is starting to attract attention in application scenarios. Pretrained models on large corpora would be very helpful for the Super Characters method, as the success of pretraining has been observed for NLP models like ELMo and BERT. For fine-tuning on small datasets, data augmentation should further boost the generalization capability. | No
38e2f07ba965b676a99be06e8872dade7c04722a | 38e2f07ba965b676a99be06e8872dade7c04722a_0 | Q: Does this implementation on CNN-DSA lead to diminishing of performance?
| Unanswerable
931a2a13a1f6a8d9107d26811089bdccc39b0800 | 931a2a13a1f6a8d9107d26811089bdccc39b0800_0 | Q: How is Super Character method modified to handle tabular data also?
Text: Introduction
The need to classify sentiment based on the multi-modal input arises in many different problems in customer related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; then feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Character method is superior to other existing methods. The Super Characters method also shows that the pretrained models on a larger dataset help improve accuracy by finetuning the CNN model on a smaller dataset. Compared with from-scratch trained Super Characters model, the finetuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method into Latin Languages. With the wide availability of low-power CNN accelerator chips BIBREF2 BIBREF3, Super Characters method has the great potential to be deployed in large scale by saving power and fast inference speed. In addition, it is easy to deploy as well. The recent work also extend its applications to chatbot BIBREF4, image captioning BIBREF5, and also tabular data machine learning BIBREF6.
The CL-AFF Shared TaskBIBREF7 is part of the Affective Content Analysis workshop at AAAI 2020. It builds upon the OffMyChest datasetBIBREF8, which contains 12,860 samples of training data and 5,000 samples of testing data. Each sample is a multi-modal input containing both text and tabular data. The text input is an English sentence from Reddit. The tabular data is the corresponding log information for each sentence, like wordcount, created utc time and etc. And each sample has six sets of binary classification labels, EmotionDisclosure?(Yes$|$No), InformationDisclosure?(Yes$|$No), Support?(Yes$|$No), EmmotionSupport?(Yes$|$No), InformationSupport?(Yes$|$No), GeneralSupport?(Yes$|$No). In this paper, we will apply Super Characters on this data set to classify the muti-modal input.
Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution
For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data. Such that both can be embedded into the Super Characters image. The CNN accelerator chip comes together with a Model Development Kit (MDK) for CNN model training, which feeds the two-dimensional Super Characters images into MDK and then obtain the fixed point model. Then, using the Software Development Kit (SDK) to load the model into the chip and send command to the CNN accelerator chip, such as to read an image, or to forward pass the image through the network to get the inference result. The advantage of using the CNN accelerator is low-power, it consumes only 300mw for an input of size 3x224x224 RGB image at the speed of 140fps. Compared with other models using GPU or FPGA, this solution implement the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Character image. This has shown good result on system implementations for NLP applications BIBREF9.
Experiments ::: Data Exploration
The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. And the other six columns are labels for each of the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes. So there will be 60 models to be trained for a 10-fold validation. The test data set has 5000 samples with only the ten columns of attributes. The system run will give labels on these test samples based on the 10-fold training.
For the training data, unique ids are 3634 compared to the whole training 12,860. While for the testing data, this number is only 2443 compared to the whole testing dataset 5000, meaning some of the records may come from the same discussion thread. And the unique authors are 7556 for training, and 3769 for testing, which means some of the authors are active that they may published more than one comments.
Based on this, we have considered to include author names in the multi-modal model as well, since a comment may be biased by the personality of its author. The maximum length of an author's name is 20 charactors, if SEW BIBREF1 is to be used to project the names onto a two-dimensional embedding. On the other hand, the nchar which indicates the number of characters for the full_text has a maximum value of 9993, and the maximum wordcount is 481. The column “label" has 37 unique values, which are different combinations of strings like “husband", “wife", “boyfriend", “girlfriend", and their abbreviations like “bf",“gf". The column “subreddit" is a categorical attribute with values in (“offmychest", “CasualConversation"). After converting the Unix time in the column of “created_utc", we found that the records are generated from 2017 to 2018. The column score has integers ranging from -44 to 1838 with 251 unique values.
Experiments ::: Design SuperCharacters Image
The sentence length distribution is given in Figure FIGREF3. The layout design for the full_text will be based on this. Since we present the English words using SEW BIBREF1 method, the size of each English word on the SuperCharacters image should better be calculated by (224/N)*(224/N) if the whole image is set to 224x224. Here N is an integer. The dimension is set to 224x224 because of the chip specification.
Experiments ::: Design SuperCharacters Image ::: Design Option One
In this design setting, we only include the full_text information and ignore the other attributes. If N=7, it means each row has 7 words, and each word has (224/7)*(224/7)=32*32 pixels. In this setting we can hold up to 49 words in full_text. For the records with words more than 49, the full_text will ignore the words from the 49th. In this case, only 0.86% of the training data and 1.98% of the testing data will have to cut the sentence at 49 words. An example of this design setting is in Figure FIGREF4.
Experiments ::: Design SuperCharacters Image ::: Design Option Two
If N=8, it means each row has 8 words, and each word has (224/8)*(224/8)=28*28 pixels. And if we set the cutlength=40, it means that we will have 5 rows for the full_text, and the other 3 rows will not be used for text, but all the space of the 224*(3*28) square pixels will be used for the tabular data given in the attributes other than full_text". For the records with words more than 40, the full_text will ignore the words from the 40th. In this case, only 2.03% of the training data and 4.14% of the testing data will have to cut the sentence at 40 words. We have the option to use the bottom part of the image to embed the other attributes. The id and sentenceid should be unrelated to the prediction, so these two attributes are not included. One example having the full_text, author, wordcount, created_utc, subreddit, score, nchar, and label is given in Figure FIGREF4.
However, the 10-fold training accuracy on this design is not good. This is partially because some of the attributes do not contribute to prediction but adds more noise instead. For example, the created time may not be very related to the prediction of the tasks but occupies a good portion of the embedding area of the image. In addition, since most of the wordcounts are centered around less than twenty, the two-dimensional embeddings of the full_text should have better resolution if the cutlength is smaller than 40. So the font size will be larger and easier for CNN to learn.
Experiments ::: Design SuperCharacters Image ::: Design Option Three
This design setting cuts the cut length of the full_text sentence to 42, and leave the space of the last row for some important attributes, including subreddit, wordcount, score, and label. An example of this design setting is in Figure FIGREF4.
Experiments ::: Design SuperCharacters Image ::: Design Option Four
This is data augmentation for Design Option Three. For a small data set, we need more data with the same semantic meaning generated from the raw labeled data without adding any noise. For Super Characters, the text are projected into the image. Adding some spaces at the front should not change the semantic meaning, and at the same time increased the number of generated Super Characters images. For each sentence, if the sentence length is less than 42, we will add one space at the front and then generate the Super Characters image. This process iterates until the length of the sentence with the added space reaches 42. An example of this design setting is in Figure FIGREF4.
Experiments ::: Experimental Results
After comparison, only Design Option One and Design Option Four are kept for the entire 10-fold training and validation.
For the system runs, it is limited to submit a maximum of 10 system runs. So, only the first five 10-folds models on both Design Option One and Design Option Four are tested against the 5000 testing data and submitted. The details of these 10 system runs are given in Table TABREF10$-$TABREF15.
In general, Design Option Four is a little better than Design Option One, but these results are still not good: they are only a little better than constantly predicting one class. The results on this OffMyChest data are not as good as on the AffCon19 CLAFF shared task, and compared with Super Characters on the Wikipedia data set, the accuracy on this data is also lower.
Several methods could be used to further improve the accuracy. First, a pretrained model may help: for this shared task, the number of training examples is relatively small for learning the complex definitions of these 6 tasks. Second, other data augmentation methods could be introduced to further boost the accuracy, for example replacing words with their synonyms. Third, the data set is skewed, and we could balance it by upsampling.
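For the third point, a simple upsampling scheme could look like the following sketch (not part of the submitted system); it duplicates minority-class rows until every class matches the size of the majority class.

import random

def upsample_minority(rows, label_key="label"):
    # Group rows by label, then sample minority rows with replacement until
    # every class has as many examples as the majority class.
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_key], []).append(row)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    random.shuffle(balanced)
    return balanced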
Conclusion
In this paper, we proposed a modified version of Super Characters in order to make it work on multi-modal data. In the case of this AffCon CLAFF shared task, the multi-modal data includes text data and tabular data. In addition, we deploy the models on low-power CNN chips, which proves the feasibility of applying DNN models with consideration of real-world practical concerns such as power and speed. The Super Characters method is relatively new and is starting to attract attention in application scenarios. Pretrained models on a large corpus would be very helpful for the Super Characters method, as the success of pretraining has been observed for NLP models like ELMo and BERT. For fine-tuning on small datasets, data augmentation should further boost the generalization capability.	simply split the image into two parts. One for the text input, and the other for the tabular data
8c981f8b992cb583e598f71741c322f522c6d2ad | 8c981f8b992cb583e598f71741c322f522c6d2ad_0 | Q: How are the substitution rules built?
Text: Introduction
In text mining and Natural Language Processing (NLP), a lemmatizer is a tool used to determine the basic form of a word (lemma). Lemmatization differs from stemming in the way this base form is determined. While stemmers chop off word endings to reach the common stem of words, lemmatizers take into account the morphology of the words in order to produce the common morphological base form, i.e., the form of the word found in a dictionary. This type of text normalization is an important step in pre-processing morphologically complex languages, like Icelandic, before conducting various tasks, such as machine translation, text mining and information retrieval.
To give an example from the Icelandic language, lemmatization helps find all instances of the personal pronoun ég “I” in a text corpus, taking into account all inflectional forms (ég, mig, mér, mín, við, okkur, and okkar). These variations of each word can be up to 16 for nouns and over a hundred for adjectives and verbs. The value of being able to reduce the number of different surface forms that appear for each word is therefore evident, as otherwise it is hard or even impossible to correctly determine word frequency in a corpus, or to look up all instances of a particular term.
In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms.
This new lemmatizer was used for large-scale lemmatization of the Icelandic Gigaword Corpus BIBREF2 with promising results, but a formal evaluation had not been carried out. Our evaluation of Nefnir indicates that, compared to previously published results, it obtains the highest lemmatization accuracy of Icelandic, with 99.55% accuracy given correct part-of-speech (PoS) tags, and 96.88% accuracy given text tagged with a PoS tagger.
Related work
The most basic approach to lemmatization is a simple look-up in a lexicon. This method has the obvious drawback that words that are not in the lexicon cannot be processed. To solve this, word transformation rules have been used to analyze the surface form of the word (the token) in order to produce the base form. These rules can either be hand-crafted or learned automatically using machine learning. When hand-crafting the rules that are used to determine the lemmas, a thorough knowledge of the morphological features of the language is needed. This is a time-consuming task, further complicated in Icelandic by the extensive inflectional system BIBREF1 . An example of a hand-crafted lemmatizer is the morphological analyzer that is part of the Czech Dependency Treebank BIBREF3 .
Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon BIBREF4 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms.
In addition, machine learning lemmatizers based on deep neural networks (DNNs) have recently emerged (see for example finnlem BIBREF5 for Finnish and LemmaTag BIBREF6 for German, Czech and Arabic). Along with the best rule-derived machine learning methods, these are now the state-of-the-art approaches to lemmatizers for morphologically complex languages. The biggest problem in lemmatization is the issue of unknown words, i.e. words not found in the training corpus or the underlying lexicon of the lemmatizer. This has been handled in various ways, such as by only looking at the suffix of a word to determine the lemma, thereby lemmatizing unseen words that (hopefully) share the same morphological rules as a known word BIBREF7 . DNN-based lemmatizers may prove useful in solving this issue, as they have their own inherent ways of handling these out-of-vocabulary (OOV) words, such as by using character-level context BIBREF8 .
Previous to Nefnir, two lemmatization tools had been developed for Icelandic. We will now briefly mention these lemmatizers, before describing Nefnir further.
CST Lemmatizer
The CST Lemmatizer BIBREF4 is a rule-based lemmatizer that has been trained for Icelandic on the Icelandic Frequency Dictionary (IFD) corpus, consisting of about 590,000 tokens BIBREF9 . This is a language-independent lemmatizer that only looks at the suffix of the word as a way of lemmatizing OOV words, and can be used on both tagged and untagged input.
The authors of Lemmald (see Section SECREF2 ) trained and evaluated the CST Lemmatizer on the IFD and observed a 98.99% accuracy on correctly tagged text and 93.15% accuracy on untagged text, in a 10-fold cross-validation, where each test set contained about 60,000 tokens. Another evaluation of this lemmatizer for Icelandic BIBREF10 reports around 90% accuracy on a random sample of 600 words from the IFD, when the input has been PoS tagged automatically (with a tagging accuracy of 91.5%). The PoS tagger used was IceTagger BIBREF11 , which is part of the IceNLP natural language processing toolkit BIBREF12 . These results indicate that the accuracy of this lemmatizer is very dependent upon the tags it is given. To our knowledge, the Icelandic CST Lemmatizer model is not openly available.
Lemmald
The second tool is Lemmald BIBREF13 , which is part of the IceNLP toolkit. It uses a mixed method of data-driven machine learning (using the IFD as a training corpus) and linguistic rules, as well as providing the option of looking up word forms in the DMII. Given correct PoS tagging of the input, Lemmald's accuracy measures at 98.54%, in a 10-fold cross-validation. The authors note that the CST Lemmatizer performs better than Lemmald when trained on the same data, without the added DMII lookup. The DMII lookup for Lemmald delivers a statistically significant improvement on the accuracy (99.55%), but it is not provided with the IceNLP distribution, so this enhancement is not available for public use. When used for lemmatization of the Icelandic Tagged Corpus (MÍM) BIBREF14 , the lemmatization accuracy of Lemmald was roughly estimated at around 90%.
System Description
The main difference between Nefnir and the two previously described lemmatizers for Icelandic, CST Lemmatizer and Lemmald, is that Nefnir derives its rules from a morphological database, the DMII, whereas the other two are trained on a corpus, the IFD. Note that the IFD only consists of about 590,000 tokens, while the DMII contains over 5.8 million inflectional forms.
Nefnir uses suffix substitution rules, derived from the DMII, to lemmatize tagged text. An example of such a rule is (ngar, nkfn, ar → ur), which can be applied to any word form with the suffix ngar that has the PoS tag nkfn (a masculine plural noun in the nominative case), transforming the suffix from ar to ur. This rule could, for example, be applied to the word form kettlingar “kittens” to obtain the corresponding lemma, kettlingur. Words are lemmatized using the rule with the longest shared suffix and the same tag.
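To make the rule format concrete, here is a small sketch (not Nefnir's actual implementation) of how such (suffix, tag, transformation) rules can be applied, choosing the matching rule with the longest shared suffix:

# Each rule: (suffix, tag, from_ending, to_ending), e.g. ("ngar", "nkfn", "ar", "ur").
RULES = [
    ("ngar", "nkfn", "ar", "ur"),
    ("ar", "nkfn", "ar", "ur"),
]

def lemmatize(word, tag, rules=RULES):
    # Use the applicable rule with the longest shared suffix and the same tag.
    candidates = [r for r in rules if r[1] == tag and word.endswith(r[0])]
    if not candidates:
        return word
    suffix, _, old, new = max(candidates, key=lambda r: len(r[0]))
    return word[: len(word) - len(old)] + new

print(lemmatize("kettlingar", "nkfn"))  # -> kettlingur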
Each inflectional form in the DMII is annotated with a grammatical tag and lemma. As the DMII is limited to inflected words, the training data is supplemented with a hand-curated list of approximately 4,500 uninflected words (such as adverbs, conjunctions and prepositions) and abbreviations.
To account for subtle differences between the tagsets used in the DMII and by the Icelandic PoS taggers, Nefnir translates all tags to an intermediate tagset which is a subset of both.
Rules are successively generated and applied to the training set, with each new rule minimizing the number of remaining errors. Rules continue to be generated until the number of errors cannot be reduced. The process is as follows:
Rules are only generated if they can correctly lemmatize at least two examples in the training set. A dictionary is created for words which are incorrectly lemmatized by the rules, for example because they require a unique transformation, such as from við “we” to ég “I”. Once trained, Nefnir lemmatizes words using the dictionary if they are present, or else with the most specific applicable rule.
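The greedy rule selection and the fallback dictionary could be sketched roughly as follows (a simplified illustration under assumed data formats, not the actual Nefnir code):

def apply_rule(rule, word, tag):
    suffix, rule_tag, old, new = rule
    if tag == rule_tag and word.endswith(suffix):
        return word[: len(word) - len(old)] + new
    return None

def train_rules(examples, candidate_rules, min_support=2):
    # examples: (word_form, tag, lemma) triples. Greedily add the rule that fixes the
    # most still-unsolved examples; stop when no rule correctly lemmatizes at least
    # min_support of them.
    rules, remaining = [], list(examples)
    while remaining and candidate_rules:
        scored = [(sum(apply_rule(r, w, t) == lemma for w, t, lemma in remaining), r)
                  for r in candidate_rules]
        count, best = max(scored, key=lambda s: s[0])
        if count < min_support:
            break
        rules.append(best)
        remaining = [(w, t, lemma) for w, t, lemma in remaining
                     if apply_rule(best, w, t) != lemma]
    # Examples the rules still get wrong go into an exception dictionary that is
    # consulted before the rules, e.g. mapping the tagged form "við" to "ég".
    exceptions = {(w, t): lemma for w, t, lemma in remaining}
    return rules, exceptions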
A rule is generated for every suffix in a word form, with some restrictions. For base words, Nefnir considers all suffixes, from the empty string to the full word. For skó “shoes”, an inflected form of the word skór “shoe”, rules are generated for the empty suffix and the suffixes ó, kó and skó. However, Nefnir does not create rules for suffixes that are shorter than the transformation required to lemmatize the word. For example, for bækur “books”, which requires the transformation ækur → ók (the lemma for bækur is bók), only the suffixes ækur and bækur are considered.
Compounding is highly productive in Icelandic and compound words comprise a very large portion of the vocabulary. This is reflected in the DMII, where over 88% of all words are compounds BIBREF15 . Any of the open word classes can be combined to form a compound, and there is no theoretical limit to how many words they can consist of. Due to the abundance of compounds in the training data, and the freedom with which they can be formed, Nefnir places additional restrictions on which suffixes to consider when generating rules for them. Suffixes for the final part of a compound are generated in the same manner as for base words, growing part by part thereafter. For example, the compound word fjall+göngu+skó “hiking boots” would yield rules for the empty suffix and the suffixes ó, kó, skó, gönguskó and fjallgönguskó. Allowing suffixes to grow freely past the final part of the compound may result in overfitting as the rules adapt to incidental patterns in the training data.
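The two suffix-enumeration schemes can be illustrated with the following sketch (it assumes the compound segmentation into parts is already available and that the length of the ending transformation is known):

def base_word_suffixes(word, transformation_length):
    # All suffixes from the empty string up to the full word, but never shorter than
    # the ending that has to be rewritten (e.g. "ækur" for bækur -> bók).
    return [word[i:] for i in range(len(word) - transformation_length, -1, -1)]

def compound_suffixes(parts, transformation_length):
    # Suffixes of the final part are enumerated as for a base word; after that the
    # candidates only grow part by part.
    suffixes = base_word_suffixes(parts[-1], transformation_length)
    for i in range(len(parts) - 2, -1, -1):
        suffixes.append("".join(parts[i:]))
    return suffixes

print(base_word_suffixes("bækur", 4))                   # ['ækur', 'bækur']
print(compound_suffixes(["fjall", "göngu", "skó"], 0))  # ['', 'ó', 'kó', 'skó', 'gönguskó', 'fjallgönguskó']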
Evaluation
We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.
Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:
Samples were extracted at random from these two corpora, roughly 10,000 tokens from each, and the lemmas manually reviewed, following the criteria laid out in the preface of the IFD BIBREF9 .
The incentive when performing the evaluation was to create a diverse corpus of text samples containing foreign words, misspellings and other OOV words. Such words are likely to appear in real-world NLP tasks, and pose special problems for lemmatizers. In the proofread and literature-heavy IFD corpus, which was used for training and evaluating the previous two lemmatizers, these OOV words are less prevalent. Consequently, the test corpus used here is not directly comparable with the corpus used to evaluate Lemmald and the CST Lemmatizer for Icelandic. On the other hand, it is more diverse and offers more challenging problems for the lemmatizer.
One of the motivations of this work was to determine how well Nefnir performs when lemmatizing text which has been PoS tagged automatically, without any manual review, as such manual labour is usually not feasible in large-scale NLP tasks. For this purpose, we created two versions of the test corpus, one with the correct PoS tags, and another tagged using IceTagger BIBREF11 . The accuracy of IceTagger is further enhanced using data from the DMII. Measured against the correct PoS tags, the accuracy of the PoS tags in the reference corpus is 95.47%.
Accuracy of the lemmatization was measured by comparing the reference corpus lemmas with the lemmas obtained from Nefnir. This was done for both the correctly tagged corpus (gold tags) and the automatically tagged one (IceTagger tags). As seen in Table TABREF10 , the accuracy for the test file with the correct PoS tags is 99.55%, with 94 errors in 21,093 tokens. For the text tagged automatically with IceTagger, the accuracy is 96.88%, with 658 errors.
These results indicate that given correct PoS tags, Nefnir obtains high accuracy, with under a hundred errors in the whole corpus sample. This is comparable to the score reported for Lemmald, when DMII lookup has been added (99.55%). In fact, it can be argued that a higher score is hard to come by, as natural language always contains some unforeseen issues that are hard to accommodate for, such as OOV words, misspellings, colloquialisms, etc. When Nefnir bases its lemmas on the automatically PoS tagged text, the accuracy decreases, from 99.55% to 96.88%, resulting in six times as many errors.
We can classify the errors made by Nefnir into the following main categories:
The most prevalent error categories when the PoS tags are correct are foreign words and proper names, such as foreign names of people, products and companies. A special issue that often came up is the cliticized definite article in Icelandic proper names. This is quite common in organization names (Síminn, Samfylkingin), titles of works of art (Svanurinn), names of ships (Vonin), buildings (Kringlan), etc. Ultimately, it depends on the aim of the lemmatization how these should be handled, but in this evaluation we assume as a general rule that they should be lemmatized with the definite article (Síminn, and not sími or Sími). The same applies to the plural, in names such as Hjálmar “helmets” (band) and Katlar (place name).
In the automatically tagged data, tagging errors are the most common source of lemmatization errors, such as when læknum (referring to the plural dative of the masculine noun læknir “doctor”) is tagged as being in the singular, which leads to it being incorrectly lemmatized as lækur “brook”. This was to be expected, as the rules learned from the DMII rely on the correct tagging of the input. However, as the authors of Lemmald comment, as long as the word class is correct, the lemmatizer can usually still find the correct lemma BIBREF13 .
The main reason for the high accuracy in our view lies in the richness of the DMII data. No lexicon can ever include all words of a particular language, as new words appear every day, but most often, new words in Icelandic are compounds, created from words already present in the DMII. This explains how rare or unknown words such as the adjective fuglglaður “bird-happy”, which appears in the corpus data, can be correctly lemmatized using the suffix rule for glaður “happy”.
As mentioned above, Nefnir, the CST Lemmatizer for Icelandic, and Lemmald have not been evaluated using the same reference corpus. The accuracy of the three lemmatizers are, therefore, not directly comparable, but our results indicate that Nefnir obtains the highest accuracy.
Conclusion
We described and evaluated Nefnir, a new open source lemmatizer for Icelandic. It uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that Nefnir obtains high accuracy for both correctly and automatically PoS-tagged input.
As taggers for Icelandic gradually get better, we can expect to see the lemmatization accuracy go up as well. Expanding the morphological database with more proper names may also help to achieve even higher accuracy. | from the Database of Modern Icelandic Inflection (DMII) BIBREF1 |
16f33de90b76975a99572e0684632d5aedbd957c | 16f33de90b76975a99572e0684632d5aedbd957c_0 | Q: Which dataset do they use?
Text: Introduction
In text mining and Natural Language Processing (NLP), a lemmatizer is a tool used to determine the basic form of a word (lemma). Lemmatization differs from stemming in the way this base form is determined. While stemmers chop off word endings to reach the common stem of words, lemmatizers take into account the morphology of the words in order to produce the common morphological base form, i.e., the form of the word found in a dictionary. This type of text normalization is an important step in pre-processing morphologically complex languages, like Icelandic, before conducting various tasks, such as machine translation, text mining and information retrieval.
To give an example from the Icelandic language, lemmatization helps find all instances of the personal pronoun ég “I” in a text corpus, taking into account all inflectional forms (ég, mig, mér, mín, við, okkur, and okkar). These variations of each word can be up to 16 for nouns and over a hundred for adjectives and verbs. The value of being able to reduce the number of different surface forms that appear for each word is therefore evident, as otherwise it is hard or even impossible to correctly determine word frequency in a corpus, or to look up all instances of a particular term.
In this paper, we describe and evaluate Nefnir BIBREF0 , a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules derived (learned) from the Database of Modern Icelandic Inflection (DMII) BIBREF1 , which contains over 5.8 million inflectional forms.
This new lemmatizer was used for large-scale lemmatization of the Icelandic Gigaword Corpus BIBREF2 with promising results, but a formal evaluation had not been carried out. Our evaluation of Nefnir indicates that, compared to previously published results, it obtains the highest lemmatization accuracy of Icelandic, with 99.55% accuracy given correct part-of-speech (PoS) tags, and 96.88% accuracy given text tagged with a PoS tagger.
Related work
The most basic approach to lemmatization is a simple look-up in a lexicon. This method has the obvious drawback that words that are not in the lexicon cannot be processed. To solve this, word transformation rules have been used to analyze the surface form of the word (the token) in order to produce the base form. These rules can either be hand-crafted or learned automatically using machine learning. When hand-crafting the rules that are used to determine the lemmas, a thorough knowledge of the morphological features of the language is needed. This is a time-consuming task, further complicated in Icelandic by the extensive inflectional system BIBREF1 . An example of a hand-crafted lemmatizer is the morphological analyzer that is part of the Czech Dependency Treebank BIBREF3 .
Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon BIBREF4 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms.
In addition, machine learning lemmatizers based on deep neural networks (DNNs) have recently emerged (see for example finnlem BIBREF5 for Finnish and LemmaTag BIBREF6 for German, Czech and Arabic). Along with the best rule-derived machine learning methods, these are now the state-of-the-art approaches to lemmatizers for morphologically complex languages. The biggest problem in lemmatization is the issue of unknown words, i.e. words not found in the training corpus or the underlying lexicon of the lemmatizer. This has been handled in various ways, such as by only looking at the suffix of a word to determine the lemma, thereby lemmatizing unseen words that (hopefully) share the same morphological rules as a known word BIBREF7 . DNN-based lemmatizers may prove useful in solving this issue, as they have their own inherent ways of handling these out-of-vocabulary (OOV) words, such as by using character-level context BIBREF8 .
Previous to Nefnir, two lemmatization tools had been developed for Icelandic. We will now briefly mention these lemmatizers, before describing Nefnir further.
CST Lemmatizer
The CST Lemmatizer BIBREF4 is a rule-based lemmatizer that has been trained for Icelandic on the Icelandic Frequency Dictionary (IFD) corpus, consisting of about 590,000 tokens BIBREF9 . This is a language-independent lemmatizer that only looks at the suffix of the word as a way of lemmatizing OOV words, and can be used on both tagged and untagged input.
The authors of Lemmald (see Section SECREF2 ) trained and evaluated the CST Lemmatizer on the IFD and observed a 98.99% accuracy on correctly tagged text and 93.15% accuracy on untagged text, in a 10-fold cross-validation, where each test set contained about 60,000 tokens. Another evaluation of this lemmatizer for Icelandic BIBREF10 reports around 90% accuracy on a random sample of 600 words from the IFD, when the input has been PoS tagged automatically (with a tagging accuracy of 91.5%). The PoS tagger used was IceTagger BIBREF11 , which is part of the IceNLP natural language processing toolkit BIBREF12 . These results indicate that the accuracy of this lemmatizer is very dependent upon the tags it is given. To our knowledge, the Icelandic CST Lemmatizer model is not openly available.
Lemmald
The second tool is Lemmald BIBREF13 , which is part of the IceNLP toolkit. It uses a mixed method of data-driven machine learning (using the IFD as a training corpus) and linguistic rules, as well as providing the option of looking up word forms in the DMII. Given correct PoS tagging of the input, Lemmald's accuracy measures at 98.54%, in a 10-fold cross-validation. The authors note that the CST Lemmatizer performs better than Lemmald when trained on the same data, without the added DMII lookup. The DMII lookup for Lemmald delivers a statistically significant improvement on the accuracy (99.55%), but it is not provided with the IceNLP distribution, so this enhancement is not available for public use. When used for lemmatization of the Icelandic Tagged Corpus (MÍM) BIBREF14 , the lemmatization accuracy of Lemmald was roughly estimated at around 90%.
System Description
The main difference between Nefnir and the two previously described lemmatizers for Icelandic, CST Lemmatizer and Lemmald, is that Nefnir derives its rules from a morphological database, the DMII, whereas the other two are trained on a corpus, the IFD. Note that the IFD only consists of about 590,000 tokens, while the DMII contains over 5.8 million inflectional forms.
Nefnir uses suffix substitution rules, derived from the DMII, to lemmatize tagged text. An example of such a rule is (ngar, nkfn, ar → ur), which can be applied to any word form with the suffix ngar that has the PoS tag nkfn (a masculine plural noun in the nominative case), transforming the suffix from ar to ur. This rule could, for example, be applied to the word form kettlingar “kittens” to obtain the corresponding lemma, kettlingur. Words are lemmatized using the rule with the longest shared suffix and the same tag.
Each inflectional form in the DMII is annotated with a grammatical tag and lemma. As the DMII is limited to inflected words, the training data is supplemented with a hand-curated list of approximately 4,500 uninflected words (such as adverbs, conjunctions and prepositions) and abbreviations.
To account for subtle differences between the tagsets used in the DMII and by the Icelandic PoS taggers, Nefnir translates all tags to an intermediate tagset which is a subset of both.
Rules are successively generated and applied to the training set, with each new rule minimizing the number of remaining errors. Rules continue to be generated until the number of errors cannot be reduced. The process is as follows:
Rules are only generated if they can correctly lemmatize at least two examples in the training set. A dictionary is created for words which are incorrectly lemmatized by the rules, for example because they require a unique transformation, such as from við “we” to ég “I”. Once trained, Nefnir lemmatizes words using the dictionary if they are present, or else with the most specific applicable rule.
A rule is generated for every suffix in a word form, with some restrictions. For base words, Nefnir considers all suffixes, from the empty string to the full word. For skó “shoes”, an inflected form of the word skór “shoe”, rules are generated for the empty suffix and the suffixes ó, kó and skó. However, Nefnir does not create rules for suffixes that are shorter than the transformation required to lemmatize the word. For example, for bækur “books”, which requires the transformation ækur → ók (the lemma for bækur is bók), only the suffixes ækur and bækur are considered.
Compounding is highly productive in Icelandic and compound words comprise a very large portion of the vocabulary. This is reflected in the DMII, where over 88% of all words are compounds BIBREF15 . Any of the open word classes can be combined to form a compound, and there is no theoretical limit to how many words they can consist of. Due to the abundance of compounds in the training data, and the freedom with which they can be formed, Nefnir places additional restrictions on which suffixes to consider when generating rules for them. Suffixes for the final part of a compound are generated in the same manner as for base words, growing part by part thereafter. For example, the compound word fjall+göngu+skó “hiking boots” would yield rules for the empty suffix and the suffixes ó, kó, skó, gönguskó and fjallgönguskó. Allowing suffixes to grow freely past the final part of the compound may result in overfitting as the rules adapt to incidental patterns in the training data.
Evaluation
We have evaluated the output of Nefnir against a reference corpus of 21,093 tokens and their correct lemmas.
Samples for the reference corpus were extracted from two larger corpora, in order to obtain a diverse vocabulary:
Samples were extracted at random from these two corpora, roughly 10,000 tokens from each, and the lemmas manually reviewed, following the criteria laid out in the preface of the IFD BIBREF9 .
The incentive when performing the evaluation was to create a diverse corpus of text samples containing foreign words, misspellings and other OOV words. Such words are likely to appear in real-world NLP tasks, and pose special problems for lemmatizers. In the proofread and literature-heavy IFD corpus, which was used for training and evaluating the previous two lemmatizers, these OOV words are less prevalent. Consequently, the test corpus used here is not directly comparable with the corpus used to evaluate Lemmald and the CST Lemmatizer for Icelandic. On the other hand, it is more diverse and offers more challenging problems for the lemmatizer.
One of the motivations of this work was to determine how well Nefnir performs when lemmatizing text which has been PoS tagged automatically, without any manual review, as such manual labour is usually not feasible in large-scale NLP tasks. For this purpose, we created two versions of the test corpus, one with the correct PoS tags, and another tagged using IceTagger BIBREF11 . The accuracy of IceTagger is further enhanced using data from the DMII. Measured against the correct PoS tags, the accuracy of the PoS tags in the reference corpus is 95.47%.
Accuracy of the lemmatization was measured by comparing the reference corpus lemmas with the lemmas obtained from Nefnir. This was done for both the correctly tagged corpus (gold tags) and the automatically tagged one (IceTagger tags). As seen in Table TABREF10 , the accuracy for the test file with the correct PoS tags is 99.55%, with 94 errors in 21,093 tokens. For the text tagged automatically with IceTagger, the accuracy is 96.88%, with 658 errors.
These results indicate that given correct PoS tags, Nefnir obtains high accuracy, with under a hundred errors in the whole corpus sample. This is comparable to the score reported for Lemmald, when DMII lookup has been added (99.55%). In fact, it can be argued that a higher score is hard to come by, as natural language always contains some unforeseen issues that are hard to accommodate for, such as OOV words, misspellings, colloquialisms, etc. When Nefnir bases its lemmas on the automatically PoS tagged text, the accuracy decreases, from 99.55% to 96.88%, resulting in six times as many errors.
We can classify the errors made by Nefnir into the following main categories:
The most prevalent error categories when the PoS tags are correct are foreign words and proper names, such as foreign names of people, products and companies. A special issue that often came up is the cliticized definite article in Icelandic proper names. This is quite common in organization names (Síminn, Samfylkingin), titles of works of art (Svanurinn), names of ships (Vonin), buildings (Kringlan), etc. Ultimately, it depends on the aim of the lemmatization how these should be handled, but in this evaluation we assume as a general rule that they should be lemmatized with the definite article (Síminn, and not sími or Sími). The same applies to the plural, in names such as Hjálmar “helmets” (band) and Katlar (place name).
In the automatically tagged data, tagging errors are the most common source of lemmatization errors, such as when læknum (referring to the plural dative of the masculine noun læknir “doctor”) is tagged as being in the singular, which leads to it being incorrectly lemmatized as lækur “brook”. This was to be expected, as the rules learned from the DMII rely on the correct tagging of the input. However, as the authors of Lemmald comment, as long as the word class is correct, the lemmatizer can usually still find the correct lemma BIBREF13 .
The main reason for the high accuracy in our view lies in the richness of the DMII data. No lexicon can ever include all words of a particular language, as new words appear every day, but most often, new words in Icelandic are compounds, created from words already present in the DMII. This explains how rare or unknown words such as the adjective fuglglaður “bird-happy”, which appears in the corpus data, can be correctly lemmatized using the suffix rule for glaður “happy”.
As mentioned above, Nefnir, the CST Lemmatizer for Icelandic, and Lemmald have not been evaluated using the same reference corpus. The accuracy of the three lemmatizers are, therefore, not directly comparable, but our results indicate that Nefnir obtains the highest accuracy.
Conclusion
We described and evaluated Nefnir, a new open source lemmatizer for Icelandic. It uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that Nefnir obtains high accuracy for both correctly and automatically PoS-tagged input.
As taggers for Icelandic gradually get better, we can expect to see the lemmatization accuracy go up as well. Expanding the morphological database with more proper names may also help to achieve even higher accuracy. | a reference corpus of 21,093 tokens and their correct lemmas |
d0b005cb7ed6d4c307745096b2ed8762612480d2 | d0b005cb7ed6d4c307745096b2ed8762612480d2_0 | Q: What baseline is used to compare the experimental results against?
Text: Introduction
Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.
We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.
We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas
Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.
Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.
We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.
We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).
While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men as of women.
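A sketch of this kind of count (the tiny word lists below are stand-ins for the much larger list of BIBREF21):

MALE_WORDS = {"he", "him", "his", "man", "men", "father", "king", "brother"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "mother", "queen", "sister"}

def count_gendered_mentions(personas):
    # Count male- and female-gendered word tokens across all persona descriptions.
    male = female = 0
    for persona in personas:
        for token in persona.lower().split():
            token = token.strip('.,!?"').split("'")[0]  # drop punctuation and possessives
            if token in MALE_WORDS:
                male += 1
            elif token in FEMALE_WORDS:
                female += 1
    return male, female

print(count_gendered_mentions(["I want to follow in my father's footsteps."]))  # -> (1, 0)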
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances
After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.
In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.
Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves imbued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues.
Methodology: Mitigating Bias in Generative Dialogue
We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces.
Methodology: Mitigating Bias in Generative Dialogue ::: Models
Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately 2,200 million training examples. The model is an 8-layer encoder, 8-layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5.
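For scale, those hyper-parameters correspond to a generic stand-in like the following (plain PyTorch, not the ParlAI implementation actually used; vocabulary, positional embeddings and beam-search decoding with beam size 5 are omitted):

import torch.nn as nn

# 8 encoder layers, 8 decoder layers, 512-dimensional embeddings, 16 attention heads.
model = nn.Transformer(d_model=512, nhead=16, num_encoder_layers=8, num_decoder_layers=8)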
Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather.
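A minimal sketch of such a swap (the handful of pairs below stand in for the full list of BIBREF21, and a real implementation would handle casing and tokenization more carefully):

GENDER_PAIRS = [("grandmother", "grandfather"), ("she", "he"), ("her", "his"),
                ("woman", "man"), ("queen", "king"), ("daughter", "son")]
SWAP = {a: b for a, b in GENDER_PAIRS}
SWAP.update({b: a for a, b in GENDER_PAIRS})

def counterfactual_copy(utterance):
    # Replace every gendered word with its counterpart of the opposite gender.
    return " ".join(SWAP.get(token, token) for token in utterance.lower().split())

dialogues = [["my grandmother was the queen of this land"]]
augmented = dialogues + [[counterfactual_copy(u) for u in d] for d in dialogues]
print(augmented[1][0])  # -> my grandfather was the king of this land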
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection
To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters
As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am a woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.
In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues
Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset).
Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training
Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result.
Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words.
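Concretely, the bin assignment and control token could look like this sketch (based on the description above; the word lists are small stand-ins for the aggregated lists):

MALE_WORDS = {"he", "him", "his", "man", "king", "father"}
FEMALE_WORDS = {"she", "her", "woman", "queen", "mother"}

def genderedness_bin(response):
    # F0/F+ marks the absence/presence of female-gendered words; likewise M0/M+ for male.
    tokens = set(response.lower().split())
    f = "F+" if tokens & FEMALE_WORDS else "F0"
    m = "M+" if tokens & MALE_WORDS else "M0"
    return f + m

def training_input(context, target):
    # During training, the bin of the target response is appended to the encoder input;
    # at inference time the control token can be set freely, e.g. forced to "F0M0".
    return context + " " + genderedness_bin(target)

print(genderedness_bin("the queen thanks her subjects"))  # -> F+M0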
Results
We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL).
Results ::: Bias is Amplified in Generation
Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets.
As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time.
Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset.
Results ::: Genderedness of Generated Text
We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.
Each of the methods we explore improves in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find that combining all methods in one – the ALL model – is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve results that are as good. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth.
Results ::: Conditional Training Controls Gendered Words
Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.
Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.”.
Results ::: Safety of Generated Text
Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16).
Results ::: Human Evaluation
Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality.
Conclusion
We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness. | Transformer generation model |
9d9b11f86a96c6d3dd862453bf240d6e018e75af | 9d9b11f86a96c6d3dd862453bf240d6e018e75af_0 | Q: How does counterfactual data augmentation aim to tackle bias?
Text: Introduction
Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.
We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.
We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas
Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.
Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.
We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.
We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).
While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances
After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.
In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.
Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves imbued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created either through automatic means or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues.
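To make the word-ratio measurement concrete, the following is a minimal sketch of how such a count can be computed. The word sets shown are tiny illustrative stand-ins for the aggregated gendered word lists cited above, and `dialogues` is a hypothetical input.

```python
import re

# Illustrative stand-ins for the aggregated gendered word lists cited above.
MALE_WORDS = {"he", "him", "his", "man", "men", "king", "father", "brother"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "queen", "mother", "sister"}

def gendered_word_counts(texts):
    """Count male- and female-gendered tokens over a list of utterances."""
    male = female = 0
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MALE_WORDS:
                male += 1
            elif token in FEMALE_WORDS:
                female += 1
    return male, female

dialogues = [
    "He is the king of this land.",
    "She spends her days with the ladies of the court.",
]
male, female = gendered_word_counts(dialogues)
print(f"male: {male}, female: {female}, male share: {male / (male + female):.2f}")
```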
Methodology: Mitigating Bias in Generative Dialogue
We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces.
Methodology: Mitigating Bias in Generative Dialogue ::: Models
Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is an 8-layer encoder, 8-layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5.
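For reference, the architecture hyper-parameters stated above can be summarized in a small configuration object. This is only a summary of the numbers in this paragraph, not the actual ParlAI configuration interface.

```python
from dataclasses import dataclass

@dataclass
class GenerativeTransformerConfig:
    # Settings stated in the text above.
    n_encoder_layers: int = 8
    n_decoder_layers: int = 8
    embedding_dim: int = 512
    n_attention_heads: int = 16
    beam_size: int = 5  # beam search is used to decode at generation time

print(GenerativeTransformerConfig())
```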
Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather.
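A minimal sketch of this augmentation step is shown below. The word pairs are a small illustrative subset of the gendered word pair list cited above, and the simple whole-word swap shown here ignores casing and morphological details.

```python
import re

# Small illustrative subset of a gendered word pair list.
WORD_PAIRS = [("grandmother", "grandfather"), ("she", "he"), ("her", "his"),
              ("woman", "man"), ("queen", "king"), ("mother", "father")]
SWAP = {}
for female, male in WORD_PAIRS:
    SWAP[female], SWAP[male] = male, female

def gender_swap(text):
    """Replace every gendered word in `text` with its counterpart."""
    tokens = re.findall(r"\w+|\W+", text)  # keep punctuation and spaces
    return "".join(SWAP.get(token.lower(), token) for token in tokens)

def augment_with_cda(dialogues):
    """Original dialogues plus a gender-swapped copy of each one."""
    return dialogues + [gender_swap(d) for d in dialogues]

print(gender_swap("My grandmother was a queen, and she ruled well."))
# -> "My grandfather was a king, and he ruled well."
```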
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection
To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters
As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am a woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.
In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues
Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset).
Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training
Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result.
Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words.
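A minimal sketch of this binning and control-token mechanism is given below. The word sets are hypothetical stand-ins for the aggregated gendered word lists, and the token format is illustrative.

```python
def genderedness_bin(response, female_words, male_words):
    """Return the F/M bin of a response, e.g. '<f0 m+>' = no female words, some male words."""
    tokens = set(response.lower().split())
    f_bin = "f+" if tokens & female_words else "f0"
    m_bin = "m+" if tokens & male_words else "m0"
    return f"<{f_bin} {m_bin}>"

def add_control_token(context, target, female_words, male_words):
    """Append the bin of the *target* response to the encoder input."""
    return context + " " + genderedness_bin(target, female_words, male_words)

female_words = {"she", "her", "queen", "woman"}
male_words = {"he", "his", "king", "man"}
print(add_control_token("who rules this land ?",
                        "the queen rules , and she is fair .",
                        female_words, male_words))
# -> "who rules this land ? <f+ m0>"
```

At inference time, the appended token can simply be fixed to the desired bin (for example the gender-neutral one) regardless of the gold response, which is what allows the generation to be steered.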
Results
We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL).
Results ::: Bias is Amplified in Generation
Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets.
As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time.
Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset.
Results ::: Genderedness of Generated Text
We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.
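The following is a minimal sketch of how the three reported quantities can be computed for a single generated response; the word sets passed in are again illustrative stand-ins for the full gendered word lists.

```python
from collections import Counter

def unigram_f1(prediction, gold):
    """Word-overlap F1 between a generated response and the gold response."""
    pred, ref = Counter(prediction.split()), Counter(gold.split())
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(pred.values()), overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def genderedness_stats(prediction, female_words, male_words):
    """Percentage of gendered tokens, and the male share of those gendered tokens."""
    tokens = prediction.lower().split()
    n_female = sum(t in female_words for t in tokens)
    n_male = sum(t in male_words for t in tokens)
    pct_gendered = 100.0 * (n_female + n_male) / max(len(tokens), 1)
    pct_male = 100.0 * n_male / max(n_female + n_male, 1)
    return pct_gendered, pct_male

print(unigram_f1("the queen rules this land", "the queen rules the land"))
print(genderedness_stats("the queen rules this land", {"queen", "she"}, {"king", "he"}))
```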
Each of the methods we explore improves in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find that combining all methods in one – the ALL model – is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough: the Positive-Bias Data Collection model does not achieve results that are as good. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth.
Results ::: Conditional Training Controls Gendered Words
Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.
Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.".
Results ::: Safety of Generated Text
Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16).
Results ::: Human Evaluation
Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality.
Conclusion
We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness. | The training dataset is augmented by swapping all gendered words by their other gender counterparts |
415f35adb0ef746883fb9c33aa53b79cc4e723c3 | 415f35adb0ef746883fb9c33aa53b79cc4e723c3_0 | Q: In the targeted data collection approach, what type of data is targetted?
Text: Introduction
Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.
We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.
We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas
Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.
Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1.
Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.
We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.
We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).
While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances
After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.
In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2.
Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.
Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves imbued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created either through automatic means or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues.
Methodology: Mitigating Bias in Generative Dialogue
We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces.
Methodology: Mitigating Bias in Generative Dialogue ::: Models
Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is an 8-layer encoder, 8-layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5.
Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection
To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters
As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am a woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.
In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5.
Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues
Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset).
Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training
Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result.
Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words.
Results
We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL).
Results ::: Bias is Amplified in Generation
Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets.
As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time.
Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset.
Results ::: Genderedness of Generated Text
We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.
Each of the methods we explore improves in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find that combining all methods in one – the ALL model – is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough: the Positive-Bias Data Collection model does not achieve results that are as good. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth.
Results ::: Conditional Training Controls Gendered Words
Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.
Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.".
Results ::: Safety of Generated Text
Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16).
Results ::: Human Evaluation
Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality.
Conclusion
We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness. | Gendered characters in the dataset |
52f1a91f546b8a25a5d72325c503ec8f9c72de23 | 52f1a91f546b8a25a5d72325c503ec8f9c72de23_0 | Q: Which language models do they compare against?
Text: Introduction
Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.
Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec BIBREF1 , by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.
Paragraph Vectors BIBREF2 generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation BIBREF3 , on various text understanding tasks BIBREF4 . However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.
We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain a substantial amount of the syntactic and semantic meaning of a phrase or a sentence BIBREF5 . For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”) BIBREF6 , and vec(“king”) - vec(“man”) + vec(“women”) is close to vec(“queen”) BIBREF5 . In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form document representations BIBREF7 , BIBREF8 , Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.
Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.
Related Works and Notations
Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and their variants BIBREF14 , BIBREF15 , to distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representations with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .
In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models BIBREF1 . In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:
Method
Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.
Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.
Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as INLINEFORM0 , is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document INLINEFORM1 with probability INLINEFORM2 . To make the corruption unbiased, we set the uncorrupted dimensions to INLINEFORM3 times their original values. Formally, DISPLAYFORM0
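A minimal NumPy sketch of this corruption, under the reading that each dimension of the bag-of-words vector is zeroed with probability q and the survivors are rescaled by 1/(1-q) so that the corruption is unbiased in expectation:

```python
import numpy as np

def corrupt(x, q, rng):
    """Unbiased mask-out corruption: zero each dimension w.p. q, rescale the survivors."""
    mask = rng.random(x.shape) >= q   # keep each dimension with probability 1 - q
    return (mask * x) / (1.0 - q)     # so that E[corrupt(x)] = x

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 2.0, 1.0, 0.0, 1.0])  # toy bag-of-words counts for one document
print(corrupt(x, q=0.5, rng=rng))
```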
Doc2VecC then defines the probability of observing a target word INLINEFORM0 given its local context INLINEFORM1 as well as the global context INLINEFORM2 as DISPLAYFORM0
Here INLINEFORM0 is the length of the document. Exactly computing the probability is impractical, instead we approximate it with negative sampling BIBREF1 . DISPLAYFORM0
here INLINEFORM0 stands for a uniform distribution over the terms in the vocabulary. The two projection matrices INLINEFORM1 and INLINEFORM2 are then learned to minimize the loss: DISPLAYFORM0
Given the learned projection matrix INLINEFORM0 , we then represent each document simply as an average of the embeddings of the words in the document, DISPLAYFORM0
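At test time this amounts to a single lookup-and-average. A minimal sketch, with a toy vocabulary and randomly initialized embeddings standing in for the learned projection matrix, is:

```python
import numpy as np

def doc2vecc_embed(tokens, U, vocab):
    """Represent a document as the average of the learned embeddings of its words."""
    ids = [vocab[t] for t in tokens if t in vocab]  # out-of-vocabulary words are skipped
    if not ids:
        return np.zeros(U.shape[1])
    return U[ids].mean(axis=0)

vocab = {"the": 0, "opening": 1, "ceremony": 2, "was": 3, "praised": 4}
U = np.random.default_rng(0).normal(size=(len(vocab), 100))  # stand-in for learned embeddings
doc = "the opening ceremony was widely praised".split()
print(doc2vecc_embed(doc, U, vocab).shape)  # (100,)
```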
We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.( EQREF10 ) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.
Corruption as data-dependent regularization
We approximate the log likelihood for each instance INLINEFORM0 in eq.( EQREF13 ) with its Taylor expansion with respect to INLINEFORM1 up to the second-order BIBREF26 , BIBREF27 , BIBREF28 . Concretely, we choose to expand at the mean of the corruption INLINEFORM2 : INLINEFORM3
where INLINEFORM0 and INLINEFORM1 are the first-order (i.e., gradient) and second-order (i.e., Hessian) of the log likelihood with respect to INLINEFORM2 . Expansion at the mean INLINEFORM3 is crucial as shown in the following steps. Let us assume that for each instance, we are going to sample the global context INLINEFORM4 infinitely many times, and thus compute the expected log likelihood with respect to the corrupted INLINEFORM5 . INLINEFORM6
The linear term disappears as INLINEFORM0 . We substitute in INLINEFORM1 for the mean INLINEFORM2 of the corrupting distribution (unbiased corruption) and the matrix INLINEFORM3 for the variance, and obtain DISPLAYFORM0
As each word in a document is corrupted independently of the others, the variance matrix INLINEFORM0 is simplified to a diagonal matrix whose INLINEFORM1 element equals INLINEFORM2 . As a result, we only need to compute the diagonal terms of the Hessian matrix INLINEFORM3 .
The INLINEFORM0 dimension of the Hessian's diagonal evaluated at the mean INLINEFORM1 is given by INLINEFORM2
Plugging the Hessian matrix and the variance matrix back into eq.( EQREF16 ), and then back into the loss defined in eq.( EQREF13 ), we can see that Doc2VecC intrinsically minimizes DISPLAYFORM0
Each INLINEFORM0 in the first term measures the log likelihood of observing the target word INLINEFORM1 given its local context INLINEFORM2 and the document vector INLINEFORM3 . As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context. The second term here is a data-dependent regularization. The regularization on the embedding INLINEFORM4 of each word INLINEFORM5 takes the following form, INLINEFORM6
where INLINEFORM0 prescribes the confidence of predicting the target word INLINEFORM1 given its neighboring context INLINEFORM2 as well as the document vector INLINEFORM3 .
Closely examining INLINEFORM0 leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily. A word INLINEFORM1 that frequently appears across the training corpus, i.e., INLINEFORM2 often, will receive a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by INLINEFORM3 , which is small if INLINEFORM4 . In other words, if INLINEFORM5 is critical to a confident prediction INLINEFORM6 when it is active, then the regularization is diminished. A similar effect was observed for dropout training for logistic regression models BIBREF27 and denoising autoencoders BIBREF28 .
Experiments
We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017
Baselines
We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing the original document INLINEFORM0 using a corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to a large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainder; Word2Vec BIBREF1 +IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset.
Sentiment analysis
For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movies reviews categorized as either positive or negative. It comes with predefined train/test split BIBREF30 : 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear less than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.
Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in BIBREF8 , where representation is learned using all the available data, including the test set; another one where the representation is learned using training and unlabeled set only. For both settings, a linear support vector machine (SVM) BIBREF31 is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, first 2400 from the uni-skip model, and the last 2400 from the bi-skip model, are generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.
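A minimal sketch of this evaluation protocol is given below, where `embed_document` stands in for whichever unsupervised representation learner is being evaluated and scikit-learn's LinearSVC plays the role of the linear SVM; the function names and interface are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def classification_error(train_docs, train_labels, test_docs, test_labels, embed_document):
    """Train a linear SVM on fixed document embeddings and report the test error rate."""
    X_train = np.stack([embed_document(d) for d in train_docs])
    X_test = np.stack([embed_document(d) for d in test_docs])
    clf = LinearSVC(C=1.0).fit(X_train, train_labels)
    return float((clf.predict(X_test) != np.array(test_labels)).mean())
```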
Accuracy. Comparing the two columns in Table TABREF20 , we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BOW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure FIGREF9 . By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to other methods. We hypothesized that it is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is INLINEFORM0 , much longer than the ones used for training and testing in the original paper, which is in the order of 10. As noted in BIBREF18 , the performance of LSTM based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
Time. Table TABREF22 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC come next. The number of parameters that need to be back-propagated in each update was increased by the number of surviving words in INLINEFORM0 . We found that both models are not sensitive to the corruption rate INLINEFORM1 in the noise model. Since the learning time decreases with a higher corruption rate, we used INLINEFORM2 throughout the experiments. Paragraph Vectors takes longer to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representations of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.
Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number in parentheses after each word is the number of times this word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that appear frequently in the training set but are uninformative, such as symbols and stop words.
Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling of frequent words introduced in BIBREF6 to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG as the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from INLINEFORM0 to INLINEFORM1 . Doc2VecC, on the other hand, naturally exerts a stronger regularization on embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.
Word analogy
In Table TABREF24 , we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we are going to quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by BIBREF1 . The dataset contains five types of semantic questions, and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by different methods. Please refer to the original paper for more details on the evaluation protocol.
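Concretely, the "simple linear algebraic operations" amount to a nearest-neighbor search around vec(b) - vec(a) + vec(c). A minimal sketch, assuming a row-normalized embedding matrix and a word-to-index vocabulary, is:

```python
import numpy as np

def analogy(a, b, c, E, vocab, inv_vocab):
    """Answer 'a is to b as c is to ?' with a row-normalized embedding matrix E."""
    query = E[vocab[b]] - E[vocab[a]] + E[vocab[c]]
    query /= np.linalg.norm(query)
    scores = E @ query                 # cosine similarity, since rows are unit norm
    for word in (a, b, c):             # exclude the query words themselves
        scores[vocab[word]] = -np.inf
    return inv_vocab[int(np.argmax(scores))]
```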
We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.
We observe similar trends as in BIBREF1 . Increasing embedding dimensionality as well as training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to INLINEFORM0 uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly poorly on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning word semantic or syntactic similarities. This also explains why the PV-DBOW BIBREF2 model architecture proposed in the original work, which completely removes word embedding layers, performs comparably to the distributed memory version.
In Table 5, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.
Document Classification
For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, and politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embeddings and document representations for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size INLINEFORM0 .
Table TABREF29 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.
Figure FIGREF30 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE BIBREF32 . We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table TABREF29 .
Figure FIGREF31 visualizes the vector representation generated by Doc2VecC w.r.t. a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
Semantic relatedness
We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset BIBREF33 . Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.
We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including a bi-directional LSTM and a Tree-LSTM trained from scratch on this dataset, and Skip-thought vectors learned from a large book corpus BIBREF34 , which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in BIBREF16 to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in BIBREF16 to score each pair of sentences: with two sentence embeddings INLINEFORM0 and INLINEFORM1 , we concatenate their component-wise product, INLINEFORM2 , and their absolute difference, INLINEFORM3 , as the feature representation.
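To make that last step concrete, the following sketch builds the pair feature used to score two sentences — the concatenation of the component-wise product and the absolute difference of the two sentence embeddings; the NumPy representation and the 100-dimensional toy vectors are assumptions for illustration only.

```python
import numpy as np

def pair_features(u, v):
    """Build the pair representation used to score sentence relatedness:
    the concatenation of the component-wise product and the absolute
    difference of the two sentence embeddings."""
    u, v = np.asarray(u), np.asarray(v)
    return np.concatenate([u * v, np.abs(u - v)])

# toy usage: two 100-dimensional sentence embeddings -> one 200-dim feature vector
u = np.random.randn(100)
v = np.random.randn(100)
print(pair_features(u, v).shape)   # (200,)
```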
Table TABREF35 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly out-performs the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in BIBREF35 , which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ( INLINEFORM0 error rate vs INLINEFORM1 ). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length in the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.
Conclusion
We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that the document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms them not only in testing efficiency, but also in the expressiveness of the generated representations. | RNNLM BIBREF11 |
bb5697cf352dd608edf119ca9b82a6b7e51c8d21 | bb5697cf352dd608edf119ca9b82a6b7e51c8d21_0 | Q: Is their approach similar to making an averaged weighted sum of word vectors, where weights reflect word frequencies?
Text: Introduction
Text understanding starts with the challenge of finding a machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.
Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec BIBREF1 , by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.
Paragraph Vectors BIBREF2 generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation BIBREF3 , on various text understanding tasks BIBREF4 . However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.
We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain a substantial amount of the syntactic and semantic meaning of a phrase or a sentence BIBREF5 . For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”) BIBREF6 , and vec(“king”) - vec(“man”) + vec(“woman”) is close to vec(“queen”) BIBREF5 . In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form a document representation BIBREF7 , BIBREF8 , Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.
Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.
Related Works and Notations
Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and its variants BIBREF14 , BIBREF15 , and distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .
In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models BIBREF1 . In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:
Method
Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.
Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.
Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as INLINEFORM0 , is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document INLINEFORM1 with probability INLINEFORM2 . To make the corruption unbiased, we set the uncorrupted dimensions to INLINEFORM3 times their original values. Formally, DISPLAYFORM0
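A minimal sketch of this corruption step is given below, assuming the document is a bag-of-words count vector; since the displayed equation and the scaling factor INLINEFORM3 are not reproduced in this text, the 1/(1-q) rescaling of surviving dimensions is inferred from the word "unbiased" and should be read as an assumption.

```python
import numpy as np

def corrupt_document(x, q, rng=np.random.default_rng()):
    """Unbiased mask-out/drop-out corruption of a bag-of-words vector `x`:
    each dimension is zeroed out independently with probability `q`; the
    surviving dimensions are rescaled by 1 / (1 - q) so that E[x_tilde] = x."""
    x = np.asarray(x, dtype=float)
    mask = rng.random(x.shape) >= q          # True = dimension survives
    return np.where(mask, x / (1.0 - q), 0.0)

# toy usage: corrupt a 10-word vocabulary count vector with q = 0.9
x = np.array([2, 0, 1, 1, 0, 0, 3, 0, 1, 1], dtype=float)
print(corrupt_document(x, q=0.9))
```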
Doc2VecC then defines the probability of observing a target word INLINEFORM0 given its local context INLINEFORM1 as well as the global context INLINEFORM2 as DISPLAYFORM0
Here INLINEFORM0 is the length of the document. Exactly computing the probability is impractical, instead we approximate it with negative sampling BIBREF1 . DISPLAYFORM0
here INLINEFORM0 stands for a uniform distribution over the terms in the vocabulary. The two projection matrices INLINEFORM1 and INLINEFORM2 are then learned to minimize the loss: DISPLAYFORM0
Given the learned projection matrix INLINEFORM0 , we then represent each document simply as an average of the embeddings of the words in the document, DISPLAYFORM0
We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.( EQREF10 ) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.
Corruption as data-dependent regularization
We approximate the log likelihood for each instance INLINEFORM0 in eq.( EQREF13 ) with its Taylor expansion with respect to INLINEFORM1 up to the second-order BIBREF26 , BIBREF27 , BIBREF28 . Concretely, we choose to expand at the mean of the corruption INLINEFORM2 : INLINEFORM3
where INLINEFORM0 and INLINEFORM1 are the first-order (i.e., gradient) and second-order (i.e., Hessian) of the log likelihood with respect to INLINEFORM2 . Expansion at the mean INLINEFORM3 is crucial as shown in the following steps. Let us assume that for each instance, we are going to sample the global context INLINEFORM4 infinitely many times, and thus compute the expected log likelihood with respect to the corrupted INLINEFORM5 . INLINEFORM6
The linear term disappears as INLINEFORM0 . We substitute in INLINEFORM1 for the mean INLINEFORM2 of the corrupting distribution (unbiased corruption) and the matrix INLINEFORM3 for the variance, and obtain DISPLAYFORM0
As each word in a document is corrupted independently of the others, the variance matrix INLINEFORM0 simplifies to a diagonal matrix whose INLINEFORM1 element equals INLINEFORM2 . As a result, we only need to compute the diagonal terms of the Hessian matrix INLINEFORM3 .
The INLINEFORM0 dimension of the Hessian's diagonal evaluated at the mean INLINEFORM1 is given by INLINEFORM2
Plugging the Hessian matrix and the variance matrix back into eq.( EQREF16 ), and then back into the loss defined in eq.( EQREF13 ), we can see that Doc2VecC intrinsically minimizes DISPLAYFORM0
Each INLINEFORM0 in the first term measures the log likelihood of observing the target word INLINEFORM1 given its local context INLINEFORM2 and the document vector INLINEFORM3 . As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context. The second term here is a data-dependent regularization. The regularization on the embedding INLINEFORM4 of each word INLINEFORM5 takes the following form, INLINEFORM6
where INLINEFORM0 prescribes the confidence of predicting the target word INLINEFORM1 given its neighboring context INLINEFORM2 as well as the document vector INLINEFORM3 .
Closely examining INLINEFORM0 leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily. A word INLINEFORM1 that frequently appears across the training corpus, i.e., INLINEFORM2 often, will have a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by INLINEFORM3 , which is small if INLINEFORM4 . In other words, if INLINEFORM5 is critical to a confident prediction INLINEFORM6 when it is active, then the regularization is diminished. A similar effect was observed for dropout training for the logistic regression model BIBREF27 and denoising autoencoders BIBREF28 .
Experiments
We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017
Baselines
We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing the original document INLINEFORM0 using the corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale the algorithm up to a large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainder; Word2Vec BIBREF1 +IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset.
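As an illustration of the Word2Vec+IDF baseline mentioned above, the sketch below forms a document vector as the IDF-weighted average of its word embeddings; the smoothed IDF variant, the data structures, and the toy inputs are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from collections import Counter

def idf_weighted_average(doc_tokens, corpus, vocab, emb):
    """Word2Vec+IDF style baseline: represent a document as the IDF-weighted
    average of the embeddings of its in-vocabulary words. `corpus` is a list
    of token lists used to estimate document frequencies; `emb` is a (V, d)
    embedding matrix indexed by `vocab`."""
    n_docs = len(corpus)
    df = Counter(w for doc in corpus for w in set(doc))
    # smoothed IDF so every weight stays positive (an assumption of this sketch)
    idf = {w: np.log(1.0 + n_docs / (1.0 + df[w])) for w in vocab}
    vecs, weights = [], []
    for w in doc_tokens:
        if w in vocab:
            vecs.append(emb[vocab[w]])
            weights.append(idf[w])
    if not vecs:
        return np.zeros(emb.shape[1])
    return np.average(np.stack(vecs), axis=0, weights=weights)

# toy usage
corpus = [["good", "movie"], ["bad", "movie"], ["good", "plot"]]
vocab = {w: i for i, w in enumerate(["good", "bad", "movie", "plot"])}
emb = np.random.RandomState(0).randn(len(vocab), 100)
print(idf_weighted_average(["good", "movie", "plot"], corpus, vocab, emb).shape)
```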
Sentiment analysis
For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split BIBREF30 : 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear fewer than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.
Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in BIBREF8 , where the representation is learned using all the available data, including the test set; in the other, the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) BIBREF31 is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.
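As a concrete illustration of this evaluation setup, a linear SVM can be trained on whatever fixed-size document vectors an algorithm produces; in the sketch below the feature matrices are random stand-ins for learned representations, and the scikit-learn classifier and its default settings are assumptions rather than the exact configuration used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Stand-in for learned document representations: 2,000 train / 500 test
# documents, each a 100-dimensional vector, with binary sentiment labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((2000, 100)), rng.integers(0, 2, 2000)
X_test, y_test = rng.standard_normal((500, 100)), rng.integers(0, 2, 500)

clf = LinearSVC(C=1.0)          # linear SVM trained on the document vectors
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```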
Accuracy. Comparing the two columns in Table TABREF20 , we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BoW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form the document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure FIGREF9 . By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to other methods. We hypothesized that this is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is INLINEFORM0 , much longer than the ones used for training and testing in the original paper, which are in the order of 10. As noted in BIBREF18 , the performance of LSTM-based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
Time. Table TABREF22 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC second that. The number of parameters that need to be back-propagated in each update was increased by the number of surviving words in INLINEFORM0 . We found that both models are not sensitive to the corruption rate INLINEFORM1 in the noise model. Since the learning time decreases with a higher corruption rate, we used INLINEFORM2 throughout the experiments. Paragraph Vectors takes a longer time to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.
Data-dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parentheses after each word is the number of times this word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words that frequently appear in the training set but are uninformative, such as symbols and stop words.
Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in BIBREF6 to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG, as it is the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from INLINEFORM0 to INLINEFORM1 . Doc2VecC, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.
Word analogy
In Table TABREF24 , we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by BIBREF1 . The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by different methods. Please refer to the original paper for more details on the evaluation protocol.
We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.
We observe similar trends as in BIBREF1 . Increasing embedding dimensionality as well as training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to INLINEFORM0 uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly poorly on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW BIBREF2 model architecture proposed in the original work, which completely removes word embedding layers, performs comparably to the distributed memory version.
In Table 5, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.
Document Classification
For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories cover sports, entertainment, literature, politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this data set, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size INLINEFORM0 .
Table TABREF29 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.
Figure FIGREF30 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE BIBREF32 . We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table TABREF29 .
Figure FIGREF31 visualizes the vector representation generated by Doc2VecC w.r.t. a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
Semantic relatedness
We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset BIBREF33 . Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.
We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including a bi-directional LSTM and a Tree-LSTM trained from scratch on this dataset, and Skip-thought vectors learned from a large book corpus BIBREF34 , which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in BIBREF16 to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in BIBREF16 to score each pair of sentences: with two sentence embeddings INLINEFORM0 and INLINEFORM1 , we concatenate their component-wise product, INLINEFORM2 , and their absolute difference, INLINEFORM3 , as the feature representation.
Table TABREF35 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly out-performs the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in BIBREF35 , which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ( INLINEFORM0 error rate vs INLINEFORM1 ). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length in the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.
Conclusion
We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that the document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms them not only in testing efficiency, but also in the expressiveness of the generated representations. | Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. |
98785bf06e60fcf0a6fe8921edab6190d0c2cec1 | 98785bf06e60fcf0a6fe8921edab6190d0c2cec1_0 | Q: How do they determine which words are informative?
Text: Introduction
Text understanding starts with the challenge of finding a machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.
Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec BIBREF1 , by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.
Paragraph Vectors BIBREF2 generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation BIBREF3 , on various text understanding tasks BIBREF4 . However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.
We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain a substantial amount of the syntactic and semantic meaning of a phrase or a sentence BIBREF5 . For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”) BIBREF6 , and vec(“king”) - vec(“man”) + vec(“woman”) is close to vec(“queen”) BIBREF5 . In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form a document representation BIBREF7 , BIBREF8 , Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.
Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.
Related Works and Notations
Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and its variants BIBREF14 , BIBREF15 , and distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .
In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models BIBREF1 . In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:
Method
Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.
Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.
Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as INLINEFORM0 , is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document INLINEFORM1 with probability INLINEFORM2 . To make the corruption unbiased, we set the uncorrupted dimensions to INLINEFORM3 times their original values. Formally, DISPLAYFORM0
Doc2VecC then defines the probability of observing a target word INLINEFORM0 given its local context INLINEFORM1 as well as the global context INLINEFORM2 as DISPLAYFORM0
Here INLINEFORM0 is the length of the document. Exactly computing the probability is impractical, instead we approximate it with negative sampling BIBREF1 . DISPLAYFORM0
here INLINEFORM0 stands for a uniform distribution over the terms in the vocabulary. The two projection matrices INLINEFORM1 and INLINEFORM2 are then learned to minimize the loss: DISPLAYFORM0
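Since the displayed loss (DISPLAYFORM0) is not reproduced in this text, the sketch below only illustrates the standard negative-sampling objective the paragraph refers to: the score of the true target word against the combined local-plus-global context is pushed up, while the scores of a few words drawn uniformly from the vocabulary are pushed down. The tensor names and the toy dimensions are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negative_sampling_loss(h, target, U, k, rng=np.random.default_rng()):
    """Negative-sampling approximation of -log p(target | context).
    `h` is the combined local+global context vector, `U` is the output
    projection matrix (one row per vocabulary word), and `k` negative words
    are drawn uniformly from the vocabulary, as described in the text."""
    V = U.shape[0]
    negatives = rng.integers(0, V, size=k)
    loss = -np.log(sigmoid(U[target] @ h))               # pull the true word up
    loss += -np.sum(np.log(sigmoid(-U[negatives] @ h)))  # push noise words down
    return loss

# toy usage: vocabulary of 1,000 words, 100-dim embeddings, 5 negative samples
U = np.random.randn(1000, 100) * 0.01
h = np.random.randn(100)
print(negative_sampling_loss(h, target=42, U=U, k=5))
```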
Given the learned projection matrix INLINEFORM0 , we then represent each document simply as an average of the embeddings of the words in the document, DISPLAYFORM0
We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.( EQREF10 ) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.
Corruption as data-dependent regularization
We approximate the log likelihood for each instance INLINEFORM0 in eq.( EQREF13 ) with its Taylor expansion with respect to INLINEFORM1 up to the second-order BIBREF26 , BIBREF27 , BIBREF28 . Concretely, we choose to expand at the mean of the corruption INLINEFORM2 : INLINEFORM3
where INLINEFORM0 and INLINEFORM1 are the first-order (i.e., gradient) and second-order (i.e., Hessian) of the log likelihood with respect to INLINEFORM2 . Expansion at the mean INLINEFORM3 is crucial as shown in the following steps. Let us assume that for each instance, we are going to sample the global context INLINEFORM4 infinitely many times, and thus compute the expected log likelihood with respect to the corrupted INLINEFORM5 . INLINEFORM6
The linear term disappears as INLINEFORM0 . We substitute in INLINEFORM1 for the mean INLINEFORM2 of the corrupting distribution (unbiased corruption) and the matrix INLINEFORM3 for the variance, and obtain DISPLAYFORM0
As each word in a document is corrupted independently of the others, the variance matrix INLINEFORM0 simplifies to a diagonal matrix whose INLINEFORM1 element equals INLINEFORM2 . As a result, we only need to compute the diagonal terms of the Hessian matrix INLINEFORM3 .
The INLINEFORM0 dimension of the Hessian's diagonal evaluated at the mean INLINEFORM1 is given by INLINEFORM2
Plugging the Hessian matrix and the variance matrix back into eq.( EQREF16 ), and then back into the loss defined in eq.( EQREF13 ), we can see that Doc2VecC intrinsically minimizes DISPLAYFORM0
Each INLINEFORM0 in the first term measures the log likelihood of observing the target word INLINEFORM1 given its local context INLINEFORM2 and the document vector INLINEFORM3 . As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context. The second term here is a data-dependent regularization. The regularization on the embedding INLINEFORM4 of each word INLINEFORM5 takes the following form, INLINEFORM6
where INLINEFORM0 prescribes the confidence of predicting the target word INLINEFORM1 given its neighboring context INLINEFORM2 as well as the document vector INLINEFORM3 .
Closely examining INLINEFORM0 leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily. A word INLINEFORM1 that frequently appears across the training corpus, i.e., INLINEFORM2 often, will have a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by INLINEFORM3 , which is small if INLINEFORM4 . In other words, if INLINEFORM5 is critical to a confident prediction INLINEFORM6 when it is active, then the regularization is diminished. A similar effect was observed for dropout training for the logistic regression model BIBREF27 and denoising autoencoders BIBREF28 .
Experiments
We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017
Baselines
We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing the original document INLINEFORM0 using the corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale the algorithm up to a large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainder; Word2Vec BIBREF1 +IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset.
Sentiment analysis
For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split BIBREF30 : 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear fewer than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.
Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in BIBREF8 , where the representation is learned using all the available data, including the test set; in the other, the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) BIBREF31 is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.
Accuracy. Comparing the two columns in Table TABREF20 , we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BoW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form the document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure FIGREF9 . By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to other methods. We hypothesized that this is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is INLINEFORM0 , much longer than the ones used for training and testing in the original paper, which are in the order of 10. As noted in BIBREF18 , the performance of LSTM-based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
Time. Table TABREF22 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC second that. The number of parameters that need to be back-propagated in each update was increased by the number of surviving words in INLINEFORM0 . We found that both models are not sensitive to the corruption rate INLINEFORM1 in the noise model. Since the learning time decreases with a higher corruption rate, we used INLINEFORM2 throughout the experiments. Paragraph Vectors takes a longer time to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.
Data-dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parentheses after each word is the number of times this word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words that frequently appear in the training set but are uninformative, such as symbols and stop words.
Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in BIBREF6 to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG, as it is the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from INLINEFORM0 to INLINEFORM1 . Doc2VecC, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.
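The subsampling trick referred to here is sketched below; the discard probability 1 - sqrt(t / f(w)) and the threshold t = 1e-5 follow the commonly cited form from BIBREF6, and should be treated as assumptions since the exact variant used in these experiments is not spelled out in this text.

```python
import random
from collections import Counter

def subsample_tokens(tokens, t=1e-5, rng=random.Random(0)):
    """Subsampling of frequent words: each occurrence of word w is kept with
    probability min(1, sqrt(t / f(w))), where f(w) is the word's relative
    frequency in the corpus, so very frequent words are aggressively dropped."""
    counts = Counter(tokens)
    total = float(len(tokens))
    keep_prob = {w: min(1.0, (t / (c / total)) ** 0.5) for w, c in counts.items()}
    return [w for w in tokens if rng.random() < keep_prob[w]]

# toy usage: the highly frequent token "the" is mostly discarded,
# while rare sentiment-bearing words survive
tokens = ["the"] * 1000 + ["bliss", "debacle"] * 5
print(Counter(subsample_tokens(tokens)))
```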
Word analogy
In Table TABREF24 , we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by BIBREF1 . The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by different methods. Please refer to the original paper for more details on the evaluation protocol.
We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.
We observe similar trends as in BIBREF1 . Increasing embedding dimensionality as well as training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to INLINEFORM0 uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly poorly on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW BIBREF2 model architecture proposed in the original work, which completely removes word embedding layers, performs comparably to the distributed memory version.
In Table 5, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.
Document Classification
For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories cover sports, entertainment, literature, politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this data set, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size INLINEFORM0 .
Table TABREF29 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.
Figure FIGREF30 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE BIBREF32 . We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table TABREF29 .
Figure FIGREF31 visualizes the vector representation generated by Doc2VecC w.r.t. a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
Semantic relatedness
We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset BIBREF33 . Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.
We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including a bi-directional LSTM and a Tree-LSTM trained from scratch on this dataset, and Skip-thought vectors learned from a large book corpus BIBREF34 , which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in BIBREF16 to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in BIBREF16 to score each pair of sentences: with two sentence embeddings INLINEFORM0 and INLINEFORM1 , we concatenate their component-wise product, INLINEFORM2 , and their absolute difference, INLINEFORM3 , as the feature representation.
Table TABREF35 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in BIBREF35 , which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ( INLINEFORM0 error rate vs INLINEFORM1 ). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length on the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours for Doc2VecC to learn the embeddings on the large book corpus on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.
Conclusion
We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that the document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms them not only in testing efficiency, but also in the expressiveness of the generated representations. | Informative are those that will not be suppressed by regularization performed. |
9846f84747b89f5c692665c4ea7111671ad9839a | 9846f84747b89f5c692665c4ea7111671ad9839a_0 | Q: What is their best performance on the largest language direction dataset?
Text: Introduction
We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place in 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\rightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and second place (ranked by teams) in the other three directions: Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh.
Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:
Introduction ::: Multi-agent dual learning (MADL)
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations.
Introduction ::: Masked sequence-to-sequence pretraining (MASS)
Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations.
Introduction ::: Neural architecture optimization (NAO)
As is well known, the evolution of neural network architectures plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architectures in a continuous and more compact space, given the historically observed architectures and their performances. It was applied to English$\leftrightarrow $Finnish translations in our submitted systems.
Introduction ::: Soft contextual data augmentation (SCA)
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems.
Our Techniques ::: Multi-agent dual learning (MADL)
MADL is an enhanced version of dual learning BIBREF1, BIBREF6. It leverages $N$ primal translation models $f_i$ and $N$ dual translation models $g_j$ for training, and eventually outputs one $f_0$ and one $g_0$ for inference, where $f_i:\mathcal {X}\mapsto \mathcal {Y},g_j:\mathcal {Y}\mapsto \mathcal {X}$, $i,j\in \lbrace 0,1,\cdots ,N-1\rbrace $. All these models are pre-trained on bilingual data. The $i$-th primal model $f_i$ has a non-negative weight $\alpha _i$ and the $j$-th dual model $g_j$ has a non-negative weight $\beta _j$. All the $\alpha _\cdot $'s and $\beta _\cdot $'s are hyper-parameters. Let $F_\alpha $ denote a combined translation model from $\mathcal {X}$ to $\mathcal {Y}$, and $G_\beta $ a combined translation model from $\mathcal {Y}$ to $\mathcal {X}$,
$F_\alpha $ and $G_\beta $ work as follows: for any $x\in \mathcal {X}$ and $y\in \mathcal {Y}$,
Let $\mathcal {B}$ denote the bilingual dataset. Let $\mathcal {M}_x$ and $\mathcal {M}_y$ denote the monolingual data of $\mathcal {X}$ and $\mathcal {Y}$. The training objective function of MADL can be written as follows:
Note that $f_{>0}$ and $g_{>0}$ will not be optimized during training and we eventually output $f_0$ and $g_0$ for translation. More details can be found in BIBREF0.
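Since the display equations above did not survive extraction, the sketch below only illustrates the mechanism under our own assumptions: the combined model $F_\alpha $ is taken to score a candidate translation with a weighted sum of the individual models' log-probabilities, the log_prob interface is hypothetical, and only $f_0$ receives gradient updates while the remaining agents stay frozen.

    import torch

    def combined_score(models, weights, x, y):
        # F_alpha: weighted combination of N primal models scoring translation y of x.
        # models[0] is f_0 (trainable); models[1:] are f_{>0} (frozen agents).
        score = weights[0] * models[0].log_prob(y, x)      # gradients flow into f_0
        for f_i, alpha_i in zip(models[1:], weights[1:]):
            with torch.no_grad():                          # f_{>0} are not optimized
                frozen = f_i.log_prob(y, x)
            score = score + alpha_i * frozen
        return score

The dual direction $G_\beta $ is scored symmetrically with $g_0,\cdots ,g_{N-1}$ and the weights $\beta _\cdot $.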
Our Techniques ::: Masked sequence-to-sequence pre-training (MASS)
MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ is masked, $0<u<v<m$ and $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.
MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function:
where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:
where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3.
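A minimal sketch of how one unsupervised MASS training example can be built from a monolingual sentence is given below. The 50% fragment length and the uniform choice of the start position are illustrative assumptions, and "[M]" stands for the mask symbol $[\mathbb {M}]$.

    import random

    MASK = "[M]"

    def mass_example(tokens, mask_ratio=0.5):
        # Build (encoder input x^{\u:v}, decoder target x^{u:v}) for one sentence.
        m = len(tokens)
        assert m >= 2, "assumes a sentence with at least two tokens"
        k = max(1, int(m * mask_ratio))          # number of consecutive tokens to mask
        u = random.randint(1, m - k)             # start position of the masked fragment
        v = u + k - 1
        enc_input = tokens[:u] + [MASK] * k + tokens[v + 1:]   # same length as x
        dec_target = tokens[u:v + 1]                           # fragment to predict
        return enc_input, dec_target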
Our Techniques ::: Neural architecture optimization (NAO)
NAO BIBREF4 is a gradient based neural architecture search (NAS) method. It contains three key components: an encoder, an accuracy predictor, and a decoder, and optimizes a network architecture as follows. (1) The encoder maps a network architecture $x$ to an embedding vector $e_x$ in a continuous space $\mathcal {E}$. (2) The predictor, a function $f$, takes $e_x\in \mathcal {E}$ as input and predicts the dev set accuracy of the architecture $x$. We perform a gradient ascent step, i.e., moving $e_x$ along the direction specified via the gradient $\frac{\partial f}{\partial e_x}$, and get a new embedding vector $e_{x^{\prime }}$:
where $\eta $ is the step size. (3) The decoder is used to map $e_{x^{\prime }}$ back to the corresponding architecture $x^{\prime }$. The new architecture $x^{\prime }$ is assumed to have better performance compared with the original one $x$ due to the property of gradient ascent. NAO repeats the above three steps, and sequentially generates better and better architectures.
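The three steps can be condensed into a short PyTorch-style sketch; the encoder, predictor, and decoder interfaces below are schematic assumptions rather than the released implementation.

    import torch

    def nao_step(encoder, predictor, decoder, arch, eta=1.0):
        # (1) map the architecture to a point e_x in the continuous space E
        e_x = encoder(arch).detach().requires_grad_(True)
        # (2) predict dev accuracy f(e_x) and move e_x along its gradient (ascent)
        predicted_acc = predictor(e_x)           # scalar prediction
        predicted_acc.backward()
        e_new = e_x + eta * e_x.grad
        # (3) decode the new embedding back into a (hopefully better) architecture
        return decoder(e_new)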
To learn high-quality encoder, decoder and performance prediction function, it is essential to have a large quantity of paired training data in the form of $(x,y)$, where $y$ is the dev set accuracy of the architecture $x$. To reduce computational cost, we share weights among different architectures BIBREF8 to aid the generation of such paired training data.
We use NAO to search for powerful neural sequence-to-sequence architectures. The search space is illustrated in Fig. FIGREF13. Specifically, each network is composed of $N$ encoder layers and $N$ decoder layers; we set $N=6$ in our experiments. Each encoder layer further contains 2 nodes and each decoder layer contains 3 nodes. Each node has two branches, each taking the output of another node as input and applying a particular operator (OP), for example identity, self-attention, or convolution, to generate its output. The outputs of the two branches are added together as the output of the node. For each layer, we search: 1) the operator at each branch of every node (for a comprehensive list of OPs, please refer to the Appendix of this paper); 2) the topology of connections between nodes within each layer. In the middle part of Fig. FIGREF13, we plot the possible connections within the nodes of a layer specified by all candidate architectures, with a particular highlight of Transformer BIBREF9.
To construct the final network, we do not adopt the typical approach of stacking the same layer multiple times. Instead, we assume that layers in the encoder/decoder can have different architectures and directly search for such a personalized architecture for each layer. We found that this design significantly improves performance due to the greater flexibility.
Our Techniques ::: Soft contextual data augmentation (SCA)
SCA is a data augmentation technology for NMT BIBREF5, which replaces a randomly chosen word in a sentence with its soft version. For any word $w \in V$, its soft version is a distribution over the vocabulary of $|V|$ words: $P(w) = (p_1(w), p_2(w), ..., p_{|V|}(w))$, where $p_j(w) \ge 0$ and $\sum _{j=1}^{|V|}p_j(w) = 1$.
Given the distribution $P(w)$, one may simply sample a word from this distribution to replace the original word $w$. Different from this method, we directly use this distribution vector to replace the randomly chosen word $w$ from the original sentence. Suppose $E$ is the embedding matrix of all the $|V|$ words. The embedding of the soft version of $w$ is
which is the expectation of word embeddings over the distribution.
In our systems, we leverage a pre-trained language model to compute $P(w)$ and condition on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have
where $LM(v_j|x_{<t})$ denotes the probability of the $j$-th word $v_j$ in the vocabulary appearing after the sequence $x_1, x_2, \cdots , x_{t-1}$. The language model is pre-trained using the monolingual data.
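A minimal sketch of replacing a randomly chosen word by its soft version is shown below. The language-model interface is a placeholder assumption, $E$ is the $|V|\times d$ embedding matrix, and in practice the resulting vector is fed to the NMT encoder in place of the usual one-hot embedding lookup.

    import torch

    def soft_embedding(lm, E, prefix_ids):
        # P(w): next-word distribution from a pre-trained LM conditioned on x_1 ... x_{t-1}.
        with torch.no_grad():
            p = lm(prefix_ids)       # shape (|V|,), non-negative, sums to 1
        # Soft word = expectation of the word embeddings under P(w).
        return p @ E                 # shape (d,)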
Submitted Systems ::: English$\leftrightarrow $German
We submit constrained systems to both English to German and German to English translations, with the same techniques.
Submitted Systems ::: English$\leftrightarrow $German ::: Dataset
We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual sentence pairs from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as the bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set.
Submitted Systems ::: English$\leftrightarrow $German ::: Model Configuration
We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and decoder have six layers. The dropout rate is fixed at $0.2$. We set the batch size to 4096 and the parameter --update-freq to 16. We apply the Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$.
Submitted Systems ::: English$\leftrightarrow $German ::: Training Pipeline
The pipeline consists of three steps:
1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$.
2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12 (a sketch of the noising step is given after this list), merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs.
3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs.
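Step 2 adds noise to the back-translated sentences in the style of BIBREF12. The exact noising parameters are not stated here, so the sketch below uses illustrative rates (word dropout, filler-token replacement, and a small local shuffle) that should be read as assumptions.

    import random

    def add_noise(tokens, p_drop=0.1, p_blank=0.1, max_shuffle=3):
        # Randomly drop words or replace them by a filler token.
        noisy = []
        for tok in tokens:
            r = random.random()
            if r < p_drop:
                continue
            noisy.append("<BLANK>" if r < p_drop + p_blank else tok)
        # Local shuffle: each surviving word moves at most max_shuffle positions.
        keys = [i + random.uniform(0, max_shuffle + 1) for i in range(len(noisy))]
        return [tok for _, tok in sorted(zip(keys, noisy))]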
Submitted Systems ::: English$\leftrightarrow $German ::: Results
The results, evaluated by sacreBLEU, are summarized in Table TABREF24. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\rightarrow $German translation, which is a $1.8$-point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement and demonstrating the effectiveness of our method.
For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain
where $f^{(j)}$ is the $j$-th translation model we accumulated, $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops.
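A sketch of this test-set knowledge-distillation step is given below; translate is a schematic stand-in for beam-search decoding with one accumulated model, and the equal-size sampling of real bitext follows the description above.

    import random

    def build_finetune_corpus(test_sources, models, bitext):
        # E: every test source (WMT14-WMT19) paired with the output of every accumulated model.
        distilled = [(s, f_j.translate(s)) for s in test_sources for f_j in models]
        # B_2: a random sample of N_T * M real bitext pairs from B_1.
        sampled_bitext = random.sample(bitext, len(distilled))
        return sampled_bitext + distilled        # fine-tune f_1 on B_2 together with E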
We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks.
Submitted Systems ::: German$\leftrightarrow $French
For German$\leftrightarrow $French translation, we follow a similar process to the one used for the English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge “commoncrawl”, “europarl-v7” and the part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rules and training procedure are the same as those used in Section SECREF17. We split $9k$ sentences from “dev08_14” as the validation set and use the remaining ones as the test set.
The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27.
Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively.
Our submitted German$\rightarrow $French system is a single model trained with MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German submission is an ensemble of three independently trained models, achieving a $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard.
Submitted Systems ::: Chinese$\rightarrow $English ::: Dataset
For Chinese$\rightarrow $English translation, we use all the bilingual and monolingual data provided by the WMT official website, and also extra bilingual and monolingual data crawled from the web. We filter the total 24M bilingual pairs from WMT using the script filter_interactive.py as described in Section SECREF17 and get 18M sentence pairs. We use the Chinese monolingual data from XMU monolingual corpus and English monolingual data from News Crawl as well as the English sentences from all English-XX language pairs in WMT. We use 100M additional parallel sentences drawn from UN data, Open Subtitles and Web crawled data, which is filtered using the same filter rule described above, as well as fast align and in/out-domain filter. Finally we get 38M bilingual pairs. We also crawled 80M additional Chinese monolingual sentences from Sougou, China News, Xinhua News, Sina News, Ifeng News, and 2M English monolingual sentences from China News and Reuters. We use newstest2017 and newstest2018 on Chinese-English as development datasets.
We normalize the Chinese sentence from SBC case to DBC case, remove non-printable characters and tokenize with both Jieba and PKUSeg to increase diversity. For English sentences, we remove non-printable characters and tokenize with Moses tokenizer. We follow previous practice BIBREF13 and apply Byte-Pair Encoding (BPE) BIBREF14 separately for Chinese and English, each with 40K vocabulary.
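The SBC-to-DBC step maps full-width characters to their half-width counterparts. A generic sketch using the standard Unicode offset is shown below; it is our own illustration, not the authors' exact preprocessing script.

    def sbc_to_dbc(text: str) -> str:
        out = []
        for ch in text:
            code = ord(ch)
            if code == 0x3000:                 # ideographic (full-width) space
                code = 0x20
            elif 0xFF01 <= code <= 0xFF5E:     # full-width ASCII block
                code -= 0xFEE0                 # shift to the half-width range
            out.append(chr(code))
        return "".join(out)

    # e.g. sbc_to_dbc("WMT19,news") == "WMT19,news"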
Submitted Systems ::: Chinese$\rightarrow $English ::: MASS Pre-training
We pre-train MASS (Transformer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and a total of 18M and 56M bilingual sentence pairs for the supervised settings (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equations DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both the 18M and 56M bilingual sentence pairs to get the baseline translation models for both Chinese$\rightarrow $English and English$\rightarrow $Chinese.
Submitted Systems ::: Chinese$\rightarrow $English ::: Back Translation and Knowledge Distillation
We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model.
Submitted Systems ::: Chinese$\rightarrow $English ::: Results
The results on newstest2017 and newstest2018 are shown in Table TABREF37. We list two baseline Transformer_big systems which use 18M bilingual data (constraint) and 56M bilingual data (unconstraint) respectively. The pre-trained model achieves about 1 BLEU point improvement after fine-tuning on both 18M and 56M bilingual data. After iterative back translation (BT) and knowledge distillation (KD), as well as re-ranking, our system achieves 30.8 and 30.9 BLEU points on newstest2017 and newstest2018 respectively.
Submitted Systems ::: Chinese$\rightarrow $English ::: WMT19 Submission
For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as pseudo-generated data according to the relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard.
Submitted Systems ::: English$\leftrightarrow $Lithuanian
For English$\leftrightarrow $Lithuanian translation, we follow a similar process to that for the Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which amounts to 2.24M sentence pairs after filtering. We use the same English monolingual data as used for Chinese-English. We select 100M Lithuanian monolingual sentences from the official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news sentences from the LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.
All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual data as well as 12M Lithuanian monolingual data into 5 parts through sampling with replacement, to get different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual data and 6M Lithuanian monolingual data. For our WMT19 submission, different from zh-en, speculation technology is not used.
The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place).
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Preprocess
We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Architecture search
We use NAO to search for sequence-to-sequence architectures for the English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we do not aim to find better neural architectures than Transformer; instead, we target models with performance comparable to Transformer that provide diversity in the re-ranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards, and the discovered neural architecture, named NAONet, is visualized in the Appendix.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Train single models
The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer) and NAONet decoding from left to right. All the models have 6-6 layers in encoder/decoder, and are obtained using the same process which is detailed as below.
Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations.
Step 2: Back translation. Do the normal back translation BIBREF11, BIBREF1 using the two models from Step 1. Specifically, we choose a $10M$-sentence monolingual English corpus, use $P_1(y|x)$ to generate the $10M$ pseudo bitext with beam search (beam size is set to 5), and mix it with the bilingual data to continue the training of $P_1(x|y)$. The mixing ratio is set to $1:1$ through up-sampling. The model obtained through this process is denoted as $P_2(x|y)$. The same process is applied to the opposite direction and the new model $P_2(y|x)$ is obtained.
Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.
Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.
To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Re-ranking
We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50.
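A sketch of the weighted n-best re-ranking is given below. The per-model score interface (a sentence-level log-probability) and the exact form of the length penalty are our assumptions; the weights are the ones tuned on the WMT18 test sets.

    def rerank(source, candidates, models, weights, length_penalty=1.0):
        # Pick the candidate with the highest weighted combination of model scores.
        def total(hyp):
            s = sum(w * m.score(source, hyp) for m, w in zip(models, weights))
            return s / (len(hyp) ** length_penalty)    # simple length normalization
        return max(candidates, key=total)

    # candidates: the 12-best list from beam search; models: the L2R Transformer,
    # R2L Transformer, and NAONet checkpoints.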
We would also like to investigate the influence of NAONet on the re-ranking results. To achieve that, in re-ranking we replace NAONet with another L2R Transformer model, trained with the same process as in subsection SECREF45 and differing only in the random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that the new architecture NAONet discovered via NAO brings more diversity to the re-ranking, thus leading to better results. We also report similar results for the Finnish-English task in Table TABREF51.
Our systems achieve $27.4$ BLEU for English$\rightarrow $Finnish and $31.9$ for Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively.
Submitted Systems ::: Russian$\rightarrow $English ::: Dataset
We use bitext data from several corpora: ParaCrawl, Common Crawl, News Commentary, Yandex Corpus, and UN Parallel Corpus. We also use the News Crawl corpora as monolingual data. The data is filtered by rules such as sentence length and language identification, resulting in a training dataset with 16M bilingual pairs and 40M monolingual sentences (20M for English and 20M for Russian). We use the WMT17 and WMT18 test sets as development data. The two languages use separate vocabularies, each with 50K BPE merge operations.
Submitted Systems ::: Russian$\rightarrow $English ::: Our system
Our final system for Russian$\rightarrow $English translation is a combination of Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline model. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic data. Combining both bilingual and synthetic data, we get a large train corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedures, 5 different models are trained and ensembled for final submission.
Submitted Systems ::: Russian$\rightarrow $English ::: Results
Our final submission achieves 40.1 BLEU score, ranked first in the leaderboard. Table TABREF56 reports the results of our system on the development set.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Dataset
We notice that most of the parallel data are out of domain. Therefore, we crawl some external data:
(1) We crawl all news articles from inform.kz, a Kazakh-English news website. Then we match an English news article to a Kazakh one by matching their images with image hashing. In this way, we find 10K pairs of bilingual news articles. We use their titles as additional parallel data. These data are in-domain and useful for training.
(2) We crawl 140K parallel sentence pairs from glosbe.com. Although most of these sentences are out-of-domain, they significantly extend the size of our parallel dataset and lead to better results.
Because most of our parallel training data are noisy, we filter these data with some rules: (1) For the KazakhTV dataset, we remove any sentence pair with an alignment score less than 0.05. (2) For the Wiki Titles dataset, we remove any sentence pair that starts with User or NGC. (3) For all datasets, we remove any sentence pair in which the English sentence contains no lowercase alphabets. (4) For all datasets, we remove any sentence pair where the length ratio is greater than 2.5:1.
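The four rules can be written as one filter function. The sketch below assumes whitespace-tokenized sentences and that the User/NGC prefix check applies to the English side; both are our interpretations rather than the exact script.

    def keep_pair(en, kk, corpus, align_score=None):
        # Rule 1: KazakhTV pairs need a minimum alignment score.
        if corpus == "KazakhTV" and align_score is not None and align_score < 0.05:
            return False
        # Rule 2: drop Wiki Titles pairs starting with "User" or "NGC".
        if corpus == "WikiTitles" and en.startswith(("User", "NGC")):
            return False
        # Rule 3: the English side must contain at least one lowercase letter.
        if not any(c.islower() for c in en):
            return False
        # Rule 4: drop pairs whose length ratio exceeds 2.5:1.
        n_en, n_kk = len(en.split()), len(kk.split())
        return max(n_en, n_kk) <= 2.5 * max(1, min(n_en, n_kk))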
We tokenize all our data using the Moses Decoder. We learn a shared BPE BIBREF14 from all our data (including all WMT19 parallel data, WMT19 monolingual data, glosbe, inform.kz news titles, and inform.kz news contents) and get a shared vocabulary of 49,152 tokens. Finally, our dataset consists of 300K bilingual sentence pairs, 700K Kazakh monolingual sentences, and many English monolingual sentences.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Our system
Our model is based on the Transformer BIBREF9. We vary the hyper-parameters to increase the diversity of our model. Our models usually have 6 encoder layers, 6/7 decoder layers, ReLU/GELU BIBREF19 activation function, and an embedding dimension of 640.
We train 4 English-Kazakh models and 4 Kazakh-English models with different random seeds and hyper-parameters. Then we apply back-translation BIBREF12 and knowledge distillation BIBREF15 for 6 rounds. In each round, we perform the following steps (a sketch of one round is given after the list):
1. Sample 4M sentences from English monolingual data and back-translate them to Kazakh with the best EN-KK model (on the dev set) in the previous round.
2. Back-translate all Kazakh monolingual data to English with the best KK-EN model in the previous round.
3. Sample 200K sentences from English monolingual data and translate them to Kazakh using the ensemble of all EN-KK models in the previous round.
4. Train 4 English-Kazakh models with BT data from step 2 and KD data from step 3. We up-sample bilingual sentence pairs by 2x.
5. Train 4 Kazakh-English models with BT data from step 1. We up-sample bilingual sentence pairs by 3x.
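The sketch below summarizes one such round. The best, ensemble, translate, and train calls are schematic stand-ins (passed in as callbacks) for the actual selection, decoding, and training scripts; the sampling sizes and up-sampling factors follow the description above.

    import random

    def run_round(en_mono, kk_mono, bitext, en_kk_models, kk_en_models,
                  best, ensemble, train):
        # Step 1: back-translate 4M English sentences with the best EN-KK model.
        f = best(en_kk_models)
        bt_for_kk_en = [(f.translate(s), s) for s in random.sample(en_mono, 4_000_000)]
        # Step 2: back-translate all Kazakh monolingual data with the best KK-EN model.
        g = best(kk_en_models)
        bt_for_en_kk = [(g.translate(s), s) for s in kk_mono]
        # Step 3: knowledge distillation from the ensemble of all EN-KK models.
        ens = ensemble(en_kk_models)
        kd_for_en_kk = [(s, ens.translate(s)) for s in random.sample(en_mono, 200_000)]
        # Steps 4-5: retrain both directions (bitext up-sampled 2x / 3x).
        new_en_kk = [train(bitext * 2 + bt_for_en_kk + kd_for_en_kk, seed=i) for i in range(4)]
        new_kk_en = [train(bitext * 3 + bt_for_kk_en, seed=i) for i in range(4)]
        return new_en_kk, new_kk_en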
Submitted Systems ::: English$\rightarrow $Kazakh ::: Result
Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard.
Conclusions
This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve translation accuracy, and we will conduct such experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also help and are worthy of exploration.
Acknowledgments
This work is supported by Microsoft Machine Translation team. | Unanswerable |
eecf62e18a790bcfdd8a56f0c4f498927ff2fb47 | eecf62e18a790bcfdd8a56f0c4f498927ff2fb47_0 | Q: How does soft contextual data augmentation work?
Text: Introduction
We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place in 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\rightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and second place (ranked by teams) in the other three directions: Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh.
Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:
Introduction ::: Multi-agent dual learning (MADL)
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations.
Introduction ::: Masked sequence-to-sequence pretraining (MASS)
Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations.
Introduction ::: Neural architecture optimization (NAO)
As is well known, the evolution of neural network architectures plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architectures in a continuous and more compact space, given the historically observed architectures and their performances. It was applied to English$\leftrightarrow $Finnish translations in our submitted systems.
Introduction ::: Soft contextual data augmentation (SCA)
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems.
Our Techniques ::: Multi-agent dual learning (MADL)
MADL is an enhanced version of dual learning BIBREF1, BIBREF6. It leverages $N$ primal translation models $f_i$ and $N$ dual translation models $g_j$ for training, and eventually outputs one $f_0$ and one $g_0$ for inference, where $f_i:\mathcal {X}\mapsto \mathcal {Y},g_j:\mathcal {Y}\mapsto \mathcal {X}$, $i,j\in \lbrace 0,1,\cdots ,N-1\rbrace $. All these models are pre-trained on bilingual data. The $i$-th primal model $f_i$ has a non-negative weight $\alpha _i$ and the $j$-th dual model $g_j$ has a non-negative weight $\beta _j$. All the $\alpha _\cdot $'s and $\beta _\cdot $'s are hyper-parameters. Let $F_\alpha $ denote a combined translation model from $\mathcal {X}$ to $\mathcal {Y}$, and $G_\beta $ a combined translation model from $\mathcal {Y}$ to $\mathcal {X}$,
$F_\alpha $ and $G_\beta $ work as follows: for any $x\in \mathcal {X}$ and $y\in \mathcal {Y}$,
Let $\mathcal {B}$ denote the bilingual dataset. Let $\mathcal {M}_x$ and $\mathcal {M}_y$ denote the monolingual data of $\mathcal {X}$ and $\mathcal {Y}$. The training objective function of MADL can be written as follows:
Note that $f_{>0}$ and $g_{>0}$ will not be optimized during training and we eventually output $f_0$ and $g_0$ for translation. More details can be found in BIBREF0.
Our Techniques ::: Masked sequence-to-sequence pre-training (MASS)
MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ is masked, $0<u<v<m$ and $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.
MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function:
where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:
where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3.
Our Techniques ::: Neural architecture optimization (NAO)
NAO BIBREF4 is a gradient based neural architecture search (NAS) method. It contains three key components: an encoder, an accuracy predictor, and a decoder, and optimizes a network architecture as follows. (1) The encoder maps a network architecture $x$ to an embedding vector $e_x$ in a continuous space $\mathcal {E}$. (2) The predictor, a function $f$, takes $e_x\in \mathcal {E}$ as input and predicts the dev set accuracy of the architecture $x$. We perform a gradient ascent step, i.e., moving $e_x$ along the direction specified via the gradient $\frac{\partial f}{\partial e_x}$, and get a new embedding vector $e_{x^{\prime }}$:
where $\eta $ is the step size. (3) The decoder is used to map $e_{x^{\prime }}$ back to the corresponding architecture $x^{\prime }$. The new architecture $x^{\prime }$ is assumed to have better performance compared with the original one $x$ due to the property of gradient ascent. NAO repeats the above three steps, and sequentially generates better and better architectures.
To learn high-quality encoder, decoder and performance prediction function, it is essential to have a large quantity of paired training data in the form of $(x,y)$, where $y$ is the dev set accuracy of the architecture $x$. To reduce computational cost, we share weights among different architectures BIBREF8 to aid the generation of such paired training data.
We use NAO to search for powerful neural sequence-to-sequence architectures. The search space is illustrated in Fig. FIGREF13. Specifically, each network is composed of $N$ encoder layers and $N$ decoder layers; we set $N=6$ in our experiments. Each encoder layer further contains 2 nodes and each decoder layer contains 3 nodes. Each node has two branches, each taking the output of another node as input and applying a particular operator (OP), for example identity, self-attention, or convolution, to generate its output. The outputs of the two branches are added together as the output of the node. For each layer, we search: 1) the operator at each branch of every node (for a comprehensive list of OPs, please refer to the Appendix of this paper); 2) the topology of connections between nodes within each layer. In the middle part of Fig. FIGREF13, we plot the possible connections within the nodes of a layer specified by all candidate architectures, with a particular highlight of Transformer BIBREF9.
To construct the final network, we do not adopt the typical approach of stacking the same layer multiple times. Instead, we assume that layers in the encoder/decoder can have different architectures and directly search for such a personalized architecture for each layer. We found that this design significantly improves performance due to the greater flexibility.
Our Techniques ::: Soft contextual data augmentation (SCA)
SCA is a data augmentation technology for NMT BIBREF5, which replaces a randomly chosen word in a sentence with its soft version. For any word $w \in V$, its soft version is a distribution over the vocabulary of $|V|$ words: $P(w) = (p_1(w), p_2(w), ..., p_{|V|}(w))$, where $p_j(w) \ge 0$ and $\sum _{j=1}^{|V|}p_j(w) = 1$.
Given the distribution $P(w)$, one may simply sample a word from this distribution to replace the original word $w$. Different from this method, we directly use this distribution vector to replace the randomly chosen word $w$ from the original sentence. Suppose $E$ is the embedding matrix of all the $|V|$ words. The embedding of the soft version of $w$ is
which is the expectation of word embeddings over the distribution.
In our systems, we leverage a pre-trained language model to compute $P(w)$ and condition on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have
where $LM(v_j|x_{<t})$ denotes the probability of the $j$-th word $v_j$ in the vocabulary appearing after the sequence $x_1, x_2, \cdots , x_{t-1}$. The language model is pre-trained using the monolingual data.
Submitted Systems ::: English$\leftrightarrow $German
We submit constrained systems to both English to German and German to English translations, with the same techniques.
Submitted Systems ::: English$\leftrightarrow $German ::: Dataset
We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual sentence pairs from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as the bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set.
Submitted Systems ::: English$\leftrightarrow $German ::: Model Configuration
We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and decoder have six layers. The dropout rate is fixed at $0.2$. We set the batch size to 4096 and the parameter --update-freq to 16. We apply the Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$.
Submitted Systems ::: English$\leftrightarrow $German ::: Training Pipeline
The pipeline consists of three steps:
1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$.
2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs.
3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs.
Submitted Systems ::: English$\leftrightarrow $German ::: Results
The results, evaluated by sacreBLEU, are summarized in Table TABREF24. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\rightarrow $German translation, which is a $1.8$-point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement and demonstrating the effectiveness of our method.
For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain
where $f^{(j)}$ is the $j$-th translation model we accumulated, $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops.
We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks.
Submitted Systems ::: German$\leftrightarrow $French
For German$\leftrightarrow $French translation, we follow a similar process to the one used for the English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge “commoncrawl”, “europarl-v7” and the part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rules and training procedure are the same as those used in Section SECREF17. We split $9k$ sentences from “dev08_14” as the validation set and use the remaining ones as the test set.
The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27.
Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively.
Our submitted German$\rightarrow $French system is a single model trained with MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German submission is an ensemble of three independently trained models, achieving a $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard.
Submitted Systems ::: Chinese$\rightarrow $English ::: Dataset
For Chinese$\rightarrow $English translation, we use all the bilingual and monolingual data provided by the WMT official website, and also extra bilingual and monolingual data crawled from the web. We filter the total 24M bilingual pairs from WMT using the script filter_interactive.py as described in Section SECREF17 and get 18M sentence pairs. We use the Chinese monolingual data from XMU monolingual corpus and English monolingual data from News Crawl as well as the English sentences from all English-XX language pairs in WMT. We use 100M additional parallel sentences drawn from UN data, Open Subtitles and Web crawled data, which is filtered using the same filter rule described above, as well as fast align and in/out-domain filter. Finally we get 38M bilingual pairs. We also crawled 80M additional Chinese monolingual sentences from Sougou, China News, Xinhua News, Sina News, Ifeng News, and 2M English monolingual sentences from China News and Reuters. We use newstest2017 and newstest2018 on Chinese-English as development datasets.
We normalize the Chinese sentence from SBC case to DBC case, remove non-printable characters and tokenize with both Jieba and PKUSeg to increase diversity. For English sentences, we remove non-printable characters and tokenize with Moses tokenizer. We follow previous practice BIBREF13 and apply Byte-Pair Encoding (BPE) BIBREF14 separately for Chinese and English, each with 40K vocabulary.
Submitted Systems ::: Chinese$\rightarrow $English ::: MASS Pre-training
We pre-train MASS (Transformer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and a total of 18M and 56M bilingual sentence pairs for the supervised settings (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equations DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both the 18M and 56M bilingual sentence pairs to get the baseline translation models for both Chinese$\rightarrow $English and English$\rightarrow $Chinese.
Submitted Systems ::: Chinese$\rightarrow $English ::: Back Translation and Knowledge Distillation
We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model.
Submitted Systems ::: Chinese$\rightarrow $English ::: Results
The results on newstest2017 and newstest2018 are shown in Table TABREF37. We list two baseline Transformer_big systems which use 18M bilingual data (constraint) and 56M bilingual data (unconstraint) respectively. The pre-trained model achieves about 1 BLEU point improvement after fine-tuning on both 18M and 56M bilingual data. After iterative back translation (BT) and knowledge distillation (KD), as well as re-ranking, our system achieves 30.8 and 30.9 BLEU points on newstest2017 and newstest2018 respectively.
Submitted Systems ::: Chinese$\rightarrow $English ::: WMT19 Submission
For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as pseudo-generated data according to the relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard.
Submitted Systems ::: English$\leftrightarrow $Lithuanian
For English$\leftrightarrow $Lithuanian translation, we follow a similar process to that for the Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which amounts to 2.24M sentence pairs after filtering. We use the same English monolingual data as used for Chinese-English. We select 100M Lithuanian monolingual sentences from the official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news sentences from the LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.
All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual data as well as 12M Lithuanian monolingual data into 5 parts through sampling with replacement, to get different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual data and 6M Lithuanian monolingual data. For our WMT19 submission, different from zh-en, speculation technology is not used.
The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place).
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Preprocess
We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Architecture search
We use NAO to search sequence-to-sequence architectures for English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we do not aim to find better neural architectures than Transformer; instead we target models with performance comparable to Transformer that provide diversity in the re-ranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards, and the discovered neural architecture, named NAONet, is visualized in the Appendix.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Train single models
The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer) and NAONet decoding from left to right. All the models have 6-6 layers in encoder/decoder, and are obtained using the same process which is detailed as below.
Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations.
Step 2: Back translation. Perform standard back translation BIBREF11, BIBREF1 with the base models from Step 1. Specifically, we choose a $10M$-sentence monolingual English corpus, use $P_1(y|x)$ to generate $10M$ pseudo bitext pairs with beam search (beam size is set to 5), and mix them with the bilingual data to continue the training of $P_1(x|y)$. The mixing ratio is set to $1:1$ through up-sampling. The model obtained through this process is denoted as $P_2(x|y)$. The same process is applied in the opposite direction and the new model $P_2(y|x)$ is attained.
Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.
Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.
To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Re-ranking
We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50.
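A minimal sketch of this n-best re-ranking is given below. It assumes each model can assign a log-probability to a candidate, and combines the per-model scores with tuned weights and a GNMT-style length normalization; the exact scoring features are an implementation detail.

from typing import Dict, Sequence

def rerank_nbest(candidates: Sequence[str],
                 model_scores: Sequence[Dict[str, float]],  # per model: candidate -> log P(candidate | source)
                 weights: Sequence[float],
                 length_penalty: float = 1.0) -> str:
    """Return the candidate with the best weighted, length-normalized score."""
    def score(hyp: str) -> float:
        weighted = sum(w * s[hyp] for w, s in zip(weights, model_scores))
        norm = ((5.0 + len(hyp.split())) / 6.0) ** length_penalty
        return weighted / norm
    return max(candidates, key=score)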
We would also like to investigate the influence of NAONet on the re-ranking results. To this end, in re-ranking we replace NAONet with another L2R Transformer model, trained with the same process as in subsection SECREF45 and differing only in the random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that the new architecture NAONet discovered via NAO brings more diversity to the ranking, thus leading to better results. We also report similar results for the Finnish-English task in Table TABREF51.
Our systems achieve $27.4$ BLEU for English$\rightarrow $Finnish and $31.9$ for Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively.
Submitted Systems ::: Russian$\rightarrow $English ::: Dataset
We use the bitext data from several corpora: ParaCrawl, Common Crawl, News Commentary, Yandex Corpus, and UN Parallel Corpus. We also use News Crawl corpora as monolingual data. The data is filtered by rules such as sentence length and language identification, resulting in a training dataset with 16M bilingual pairs and 40M monolingual sentences (20M for English and 20M for Russian). We use the WMT17 and WMT18 test sets as development data. The two languages use separate vocabularies, each with 50K BPE merge operations.
Submitted Systems ::: Russian$\rightarrow $English ::: Our system
Our final system for Russian$\rightarrow $English translation is a combination of Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline model. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic data. Combining both bilingual and synthetic data, we get a large train corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedures, 5 different models are trained and ensembled for final submission.
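Model ensembling at decoding time can be sketched as follows: at every step, the next-token distributions of the models are averaged before the next token is chosen (greedy decoding is shown for brevity; the submitted system uses beam search). The model interface assumed here is a placeholder.

import torch

@torch.no_grad()
def ensemble_greedy_decode(models, src, bos_id, eos_id, max_len=256):
    """Average the per-step next-token distributions of all models.

    Each model is assumed to map (src, target_prefix) to logits of shape
    [1, prefix_len, vocab]; this is a sketch, not the real decoder.
    """
    ys = torch.tensor([[bos_id]])
    for _ in range(max_len):
        step_probs = torch.stack(
            [m(src, ys)[:, -1, :].softmax(dim=-1) for m in models]
        ).mean(dim=0)                              # [1, vocab]
        next_tok = step_probs.argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, next_tok], dim=1)
        if next_tok.item() == eos_id:
            break
    return ys.squeeze(0).tolist()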
Submitted Systems ::: Russian$\rightarrow $English ::: Results
Our final submission achieves 40.1 BLEU score, ranked first in the leaderboard. Table TABREF56 reports the results of our system on the development set.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Dataset
We notice that most of the parallel data are out of domain. Therefore, we crawl some external data:
(1) We crawl all news articles from inform.kz, a Kazakh-English news website. Then we match an English news article to a Kazakh one by matching their images with image hashing. In this way, we find 10K pairs of bilingual news articles. We use their titles as additional parallel data. These data are in-domain and useful in training.
(2) We crawl 140K parallel sentence pairs from glosbe.com. Although most of these sentences are out-of-domain, they significantly extend the size of our parallel dataset and lead to better results.
Because most of our parallel training data are noisy, we filter these data with some rules: (1) For the KazakhTV dataset, we remove any sentence pair with an alignment score less than 0.05. (2) For the Wiki Titles dataset, we remove any sentence pair that starts with User or NGC. (3) For all datasets, we remove any sentence pair in which the English sentence contains no lowercase letters. (4) For all datasets, we remove any sentence pair where the length ratio is greater than 2.5:1.
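These rules translate directly into code; the sketch below transcribes rules (1)-(4), where applying the prefix rule to both sides of the pair is our own simplifying assumption.

def keep_pair(en: str, kk: str, align_score: float = 1.0, dataset: str = "other") -> bool:
    """Return True if the sentence pair survives the filtering rules."""
    # (1) KazakhTV: drop pairs with alignment score below 0.05.
    if dataset == "kazakhtv" and align_score < 0.05:
        return False
    # (2) Wiki Titles: drop pairs starting with "User" or "NGC".
    if dataset == "wiki_titles" and (en.startswith(("User", "NGC")) or kk.startswith(("User", "NGC"))):
        return False
    # (3) Drop pairs whose English side contains no lowercase letters.
    if not any(c.islower() for c in en):
        return False
    # (4) Drop pairs whose length ratio exceeds 2.5:1.
    n_en, n_kk = len(en.split()), len(kk.split())
    if max(n_en, n_kk) > 2.5 * max(1, min(n_en, n_kk)):
        return False
    return True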
We tokenize all our data using the Moses Decoder. We learn a shared BPE BIBREF14 from all our data (including all WMT19 parallel data, WMT19 monolingual data, glosbe, inform.kz news titles, and inform.kz news contents) and get a shared vocabulary of 49,152 tokens. Finally, our dataset consists of 300K bilingual sentence pairs, 700K Kazakh monolingual sentences, and many English monolingual sentences.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Our system
Our model is based on the Transformer BIBREF9. We vary the hyper-parameters to increase the diversity of our models. Our models usually have 6 encoder layers, 6/7 decoder layers, ReLU/GELU BIBREF19 activation functions, and an embedding dimension of 640.
We train 4 English-Kazakh models and 4 Kazakh-English models with different random seeds and hyper-parameters. Then we apply back-translation BIBREF12 and knowledge distillation BIBREF15 for 6 rounds. In each round, we
1. Sample 4M sentences from English monolingual data and back-translate them to Kazakh with the best EN-KK model (on the dev set) in the previous round.
2. Back-translate all Kazakh monolingual data to English with the best KK-EN model in the previous round.
3. Sample 200K sentences from English monolingual data and translate them to Kazakh using the ensemble of all EN-KK models in the previous round.
4. Train 4 English-Kazakh models with BT data from step 2 and KD data from step 3. We up-sample bilingual sentence pairs by 2x.
5. Train 4 Kazakh-English models with BT data from step 1. We up-sample bilingual sentence pairs by 3x.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Result
Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard.
Conclusions
This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve the translation accuracy, and we will conduct such experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also help and are worth exploring.
Acknowledgments
This work is supported by Microsoft Machine Translation team.
A: softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary
acda028a21a465c984036dcbb124b7f03c490b41 | acda028a21a465c984036dcbb124b7f03c490b41_0 | Q: How does multi-agent dual learning work?
Text: Introduction
We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place in 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\rightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and were placed second (ranked by teams) in the other three directions: Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh.
Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:
Introduction ::: Multi-agent dual learning (MADL)
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations.
Introduction ::: Masked sequence-to-sequence pretraining (MASS)
Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations.
Introduction ::: Neural architecture optimization (NAO)
As well known, the evolution of neural network architecture plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architecture in a continuous and more compact space given the historically observed architectures and their performances. It was applied in English$\leftrightarrow $Finnish translations in our submitted systems.
Introduction ::: Soft contextual data augmentation (SCA)
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems.
Our Techniques ::: Multi-agent dual learning (MADL)
MADL is an enhanced version of dual learning BIBREF1, BIBREF6. It leverages $N$ primal translation models $f_i$ and $N$ dual translation models $g_j$ for training, and eventually outputs one $f_0$ and one $g_0$ for inference, where $f_i:\mathcal {X}\mapsto \mathcal {Y},g_j:\mathcal {Y}\mapsto \mathcal {X}$, $i,j\in \lbrace 0,1,\cdots ,N-1\rbrace $. All these models are pre-trained on bilingual data. The $i$-th primal model $f_i$ has a non-negative weight $\alpha _i$ and the $j$-th dual model $g_j$ has a non-negative weight $\beta _j$. All the $\alpha _\cdot $'s and $\beta _\cdot $'s are hyper-parameters. Let $F_\alpha $ denote a combined translation model from $\mathcal {X}$ to $\mathcal {Y}$, and $G_\beta $ a combined translation model from $\mathcal {Y}$ to $\mathcal {X}$,
$F_\alpha $ and $G_\beta $ work as follows: for any $x\in \mathcal {X}$ and $y\in \mathcal {Y}$,
Let $\mathcal {B}$ denote the bilingual dataset. Let $\mathcal {M}_x$ and $\mathcal {M}_y$ denote the monolingual data of $\mathcal {X}$ and $\mathcal {Y}$. The training objective function of MADL can be written as follows:
Note that $f_{>0}$ and $g_{>0}$ will not be optimized during training and we eventually output $f_0$ and $g_0$ for translation. More details can be found in BIBREF0.
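As a rough illustration only, the sketch below combines the $N$ agents by a weighted sum of their log-probabilities and derives a training signal on monolingual data in the spirit of dual learning. The precise definitions of $F_\alpha $, $G_\beta $ and the objective are those of BIBREF0; the combination rule and the model interface used here are simplifying assumptions.

from typing import Callable, Sequence

def combined_logprob(models: Sequence, weights: Sequence[float], src: str, tgt: str) -> float:
    """Weighted combination of the agents' translation scores.

    Assumes each agent exposes logprob(src, tgt) = log P(tgt | src);
    a weighted sum of log-probabilities is used purely for illustration.
    """
    return sum(w * m.logprob(src, tgt) for w, m in zip(weights, models))

def monolingual_signal(f_models, alphas, g_models, betas, x: str,
                       sample: Callable) -> float:
    """Dual-learning-style training signal on a monolingual sentence x:
    translate x with the combined forward model, then score how well the
    combined backward model reconstructs x. Only f_0 and g_0 receive
    gradient updates, as stated in the text."""
    y_hat = sample(f_models, alphas, x)                   # x -> y
    return -combined_logprob(g_models, betas, y_hat, x)   # y -> x reconstruction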
Our Techniques ::: Masked sequence-to-sequence pre-training (MASS)
MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ is masked, where $0<u<v<m$ and $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.
MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function:
where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:
where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3.
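The masking scheme itself is simple; the sketch below uses 0-based indices and masks half of the tokens, a ratio chosen here purely for illustration.

import random

MASK = "[M]"

def mass_mask(tokens, mask_ratio=0.5):
    """Mask a contiguous fragment of the sentence for MASS pre-training.

    Returns (encoder_input, decoder_target): the encoder sees the sentence
    with the fragment replaced by [M] symbols (length unchanged), and the
    decoder has to predict the masked fragment itself.
    """
    m = len(tokens)
    k = max(1, int(round(mask_ratio * m)))     # number of masked tokens
    u = random.randint(0, m - k)               # fragment start
    v = u + k                                  # fragment end (exclusive)
    encoder_input = tokens[:u] + [MASK] * k + tokens[v:]
    decoder_target = tokens[u:v]
    return encoder_input, decoder_target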
Our Techniques ::: Neural architecture optimization (NAO)
NAO BIBREF4 is a gradient based neural architecture search (NAS) method. It contains three key components: an encoder, an accuracy predictor, and a decoder, and optimizes a network architecture as follows. (1) The encoder maps a network architecture $x$ to an embedding vector $e_x$ in a continuous space $\mathcal {E}$. (2) The predictor, a function $f$, takes $e_x\in \mathcal {E}$ as input and predicts the dev set accuracy of the architecture $x$. We perform a gradient ascent step, i.e., moving $e_x$ along the direction specified via the gradient $\frac{\partial f}{\partial e_x}$, and get a new embedding vector $e_{x^{\prime }}$:
where $\eta $ is the step size. (3) The decoder is used to map $e_{x^{\prime }}$ back to the corresponding architecture $x^{\prime }$. The new architecture $x^{\prime }$ is assumed to have better performance compared with the original one $x$ due to the property of gradient ascent. NAO repeats the above three steps, and sequentially generates better and better architectures.
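The gradient-ascent step in the continuous space can be written as the short sketch below. It assumes the performance predictor is a differentiable module that returns a single predicted accuracy; the decoder that maps the updated embedding back to a discrete architecture is omitted.

import torch

def nao_update(arch_embedding: torch.Tensor, predictor: torch.nn.Module, eta: float) -> torch.Tensor:
    """One NAO step: move the architecture embedding e_x along the gradient
    of the predicted dev-set accuracy, e_x' = e_x + eta * d f / d e_x."""
    e_x = arch_embedding.detach().clone().requires_grad_(True)
    predicted_accuracy = predictor(e_x)        # scalar prediction f(e_x)
    predicted_accuracy.backward()
    with torch.no_grad():
        e_x_new = e_x + eta * e_x.grad
    return e_x_new.detach()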
To learn high-quality encoder, decoder and performance prediction function, it is essential to have a large quantity of paired training data in the form of $(x,y)$, where $y$ is the dev set accuracy of the architecture $x$. To reduce computational cost, we share weights among different architectures BIBREF8 to aid the generation of such paired training data.
We use NAO to search powerful neural sequence-to-sequence architectures. The search space is illustrated in Fig. FIGREF13. Specifically, each network is composed of $N$ encoder layers and $N$ decoder layers. We set $N=6$ in our experiments. Each encoder layer contains two nodes and each decoder layer contains three nodes. Each node has two branches, each taking the output of another node as input and applying a particular operator (OP), for example identity, self-attention or convolution, to generate its output. The outputs of the two branches are added together as the output of the node. For each layer, we search: 1) the operator at each branch of every node (for a comprehensive list of the different OPs, please refer to the Appendix of this paper); and 2) the topology of connections between nodes within the layer. In the middle part of Fig. FIGREF13, we plot the possible connections within the nodes of a layer specified by all candidate architectures, with a particular highlight of Transformer BIBREF9.
To construct the final network, we do not adopt the typical approach of stacking the same layer multiple times. Instead, we allow the layers in the encoder/decoder to have different architectures and directly search a personalized architecture for each layer. We found that such a design significantly improves the performance due to the greater flexibility.
Our Techniques ::: Soft contextual data augmentation (SCA)
SCA is a data augmentation technology for NMT BIBREF5, which replaces a randomly chosen word in a sentence with its soft version. For any word $w \in V$, its soft version is a distribution over the vocabulary of $|V|$ words: $P(w) = (p_1(w), p_2(w), ..., p_{|V|}(w))$, where $p_j(w) \ge 0$ and $\sum _{j=1}^{|V|}p_j(w) = 1$.
Given the distribution $P(w)$, one may simply sample a word from this distribution to replace the original word $w$. Different from this method, we directly use this distribution vector to replace the randomly chosen word $w$ from the original sentence. Suppose $E$ is the embedding matrix of all the $|V|$ words. The embedding of the soft version of $w$ is
which is the expectation of word embeddings over the distribution.
In our systems, we leverage a pre-trained language model to compute $P(w)$ and condition on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have
where $LM(v_j|x_{<t})$ denotes the probability of the $j$-th word $v_j$ in the vocabulary appearing after the sequence $x_1, x_2, \cdots , x_{t-1}$. The language model is pre-trained using the monolingual data.
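In code, the soft word is simply the expectation of the embedding matrix under the language-model distribution; the sketch below assumes the language model provides logits over the vocabulary for position $t$.

import torch

def soft_word_embedding(lm_logits_t: torch.Tensor, embedding_matrix: torch.Tensor) -> torch.Tensor:
    """Return the soft embedding of the word at position t.

    lm_logits_t: [|V|] logits of a language model conditioned on x_{<t}.
    embedding_matrix: [|V|, d] embedding matrix E.
    The result is P(w)^T E, the expectation of word embeddings under
    P(w) = softmax(lm_logits_t).
    """
    p = torch.softmax(lm_logits_t, dim=-1)
    return p @ embedding_matrix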
Submitted Systems ::: English$\leftrightarrow $German
We submit constrained systems to both English to German and German to English translations, with the same techniques.
Submitted Systems ::: English$\leftrightarrow $German ::: Dataset
We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual data from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as the bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set.
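For reference, the core of BPE learning BIBREF14 can be sketched in a few lines; in practice we rely on standard subword toolkits with 35000 merge operations rather than a toy implementation like this one.

import collections
import re

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations from a {word: frequency} dictionary."""
    vocab = {" ".join(word) + " </w>": freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = collections.Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
        vocab = {pattern.sub("".join(best), word): freq for word, freq in vocab.items()}
    return merges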
Submitted Systems ::: English$\leftrightarrow $German ::: Model Configuration
We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and decoder have six layers. The dropout rate is fixed at $0.2$. We set the batch size to 4096 and the parameter --update-freq to 16. We apply the Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$.
Submitted Systems ::: English$\leftrightarrow $German ::: Training Pipeline
The pipeline consists of three steps:
1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$.
2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs.
3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs.
Submitted Systems ::: English$\leftrightarrow $German ::: Results
The results are summarized in Table TABREF24, which are evaluated by sacreBLEU. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\rightarrow $German translation, which is $1.8$ point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement, demonstrating the effectiveness of our method.
For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain
where $f^{(j)}$ is the $j$-th translation model we accumulated, $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops.
We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks.
Submitted Systems ::: German$\leftrightarrow $French
For German$\leftrightarrow $French translation, we follow a similar process as the one used to English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rule and training procedure are the same as that used in Section SECREF17. We split $9k$ sentences from the “dev08_14” as the validation set and use the remaining ones as the test set.
The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27.
Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively.
Our submitted German$\rightarrow $French is a single system trained by MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German is an ensemble of three independently trained models, achieving $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard.
Submitted Systems ::: Chinese$\rightarrow $English ::: Dataset
For Chinese$\rightarrow $English translation, we use all the bilingual and monolingual data provided by the WMT official website, and also extra bilingual and monolingual data crawled from the web. We filter the total 24M bilingual pairs from WMT using the script filter_interactive.py as described in Section SECREF17 and get 18M sentence pairs. We use the Chinese monolingual data from XMU monolingual corpus and English monolingual data from News Crawl as well as the English sentences from all English-XX language pairs in WMT. We use 100M additional parallel sentences drawn from UN data, Open Subtitles and Web crawled data, which is filtered using the same filter rule described above, as well as fast align and in/out-domain filter. Finally we get 38M bilingual pairs. We also crawled 80M additional Chinese monolingual sentences from Sougou, China News, Xinhua News, Sina News, Ifeng News, and 2M English monolingual sentences from China News and Reuters. We use newstest2017 and newstest2018 on Chinese-English as development datasets.
We normalize the Chinese sentence from SBC case to DBC case, remove non-printable characters and tokenize with both Jieba and PKUSeg to increase diversity. For English sentences, we remove non-printable characters and tokenize with Moses tokenizer. We follow previous practice BIBREF13 and apply Byte-Pair Encoding (BPE) BIBREF14 separately for Chinese and English, each with 40K vocabulary.
Submitted Systems ::: Chinese$\rightarrow $English ::: MASS Pre-training
We pre-train MASS (Transformer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and a total of 18M and 56M bilingual sentence pairs for the supervised setting (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equation DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both 18M and 56M bilingual sentence pairs to get the baseline translation models for both Chinese$\rightarrow $English and English$\rightarrow $Chinese.
Submitted Systems ::: Chinese$\rightarrow $English ::: Back Translation and Knowledge Distillation
We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model.
Submitted Systems ::: Chinese$\rightarrow $English ::: Results
The results on newstest2017 and newstest2018 are shown in Table TABREF37. We list two baseline Transformer_big systems which use 18M bilingual data (constraint) and 56M bilingual data (unconstraint) respectively. The pre-trained model achieves about 1 BLEU point improvement after fine-tuning on both 18M and 56M bilingual data. After iterative back translation (BT) and knowledge distillation (KD), as well as re-ranking, our system achieves 30.8 and 30.9 BLEU points on newstest2017 and newstest2018 respectively.
Submitted Systems ::: Chinese$\rightarrow $English ::: WMT19 Submission
For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as pseudo-generated data according to their relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard.
Submitted Systems ::: English$\leftrightarrow $Lithuanian
For English$\leftrightarrow $Lithuanian translation, we follow the similar process as that for Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which is 2.24M after filtration. We use the same English monolingual data as used in Chinese-English. We select 100M Lithuanian monolingual data from official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news data from LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.
All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual data as well as 12M Lithuanian monolingual data into 5 parts through sampling with replacement, to get different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual data and 6M Lithuanian monolingual data. For our WMT19 submission, different from zh-en, speculation technology is not used.
The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place).
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Preprocess
We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Architecture search
We use NAO to search sequence-to-sequence architectures for English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we do not aim to find better neural architectures than Transformer; instead we target models with performance comparable to Transformer that provide diversity in the re-ranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards, and the discovered neural architecture, named NAONet, is visualized in the Appendix.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Train single models
The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer) and NAONet decoding from left to right. All the models have 6-6 layers in encoder/decoder, and are obtained using the same process which is detailed as below.
Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations.
Step 2: Back translation. Perform standard back translation BIBREF11, BIBREF1 with the base models from Step 1. Specifically, we choose a $10M$-sentence monolingual English corpus, use $P_1(y|x)$ to generate $10M$ pseudo bitext pairs with beam search (beam size is set to 5), and mix them with the bilingual data to continue the training of $P_1(x|y)$. The mixing ratio is set to $1:1$ through up-sampling. The model obtained through this process is denoted as $P_2(x|y)$. The same process is applied in the opposite direction and the new model $P_2(y|x)$ is attained.
Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.
Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.
To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48.
Submitted Systems ::: English$\leftrightarrow $Finnish ::: Re-ranking
We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50.
We would also like to investigate the influence of NAONet on the re-ranking results. To this end, in re-ranking we replace NAONet with another L2R Transformer model, trained with the same process as in subsection SECREF45 and differing only in the random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that the new architecture NAONet discovered via NAO brings more diversity to the ranking, thus leading to better results. We also report similar results for the Finnish-English task in Table TABREF51.
Our systems achieve $27.4$ BLEU for English$\rightarrow $Finnish and $31.9$ for Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively.
Submitted Systems ::: Russian$\rightarrow $English ::: Dataset
We use the bitext data from several corpora: ParaCrawl, Common Crawl, News Commentary, Yandex Corpus, and UN Parallel Corpus. We also use News Crawl corpora as monolingual data. The data is filtered by rules such as sentence length and language identification, resulting in a training dataset with 16M bilingual pairs and 40M monolingual sentences (20M for English and 20M for Russian). We use the WMT17 and WMT18 test sets as development data. The two languages use separate vocabularies, each with 50K BPE merge operations.
Submitted Systems ::: Russian$\rightarrow $English ::: Our system
Our final system for Russian$\rightarrow $English translation is a combination of Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline model. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic data. Combining both bilingual and synthetic data, we get a large train corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedures, 5 different models are trained and ensembled for final submission.
Submitted Systems ::: Russian$\rightarrow $English ::: Results
Our final submission achieves 40.1 BLEU score, ranked first in the leaderboard. Table TABREF56 reports the results of our system on the development set.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Dataset
We notice that most of the parallel data are out of domain. Therefore, we crawl some external data:
(1) We crawl all news articles from inform.kz, a Kazakh-English news website. Then we match an English news article to a Kazakh one by matching their images with image hashing. In this way, we find 10K pairs of bilingual news articles. We use their titles as additional parallel data. These data are in-domain and useful in training.
(2) We crawl 140K parallel sentence pairs from glosbe.com. Although most of these sentences are out-of-domain, they significantly extend the size of our parallel dataset and lead to better results.
Because most of our parallel training data are noisy, we filter these data with some rules: (1) For the KazakhTV dataset, we remove any sentence pair with an alignment score less than 0.05. (2) For the Wiki Titles dataset, we remove any sentence pair that starts with User or NGC. (3) For all datasets, we remove any sentence pair in which the English sentence contains no lowercase letters. (4) For all datasets, we remove any sentence pair where the length ratio is greater than 2.5:1.
We tokenize all our data using the Moses Decoder. We learn a shared BPE BIBREF14 from all our data (including all WMT19 parallel data, WMT19 monolingual data, glosbe, inform.kz news titles, and inform.kz news contents) and get a shared vocabulary of 49,152 tokens. Finally, our dataset consists of 300K bilingual sentence pairs, 700K Kazakh monolingual sentences, and many English monolingual sentences.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Our system
Our model is based on the Transformer BIBREF9. We vary the hyper-parameters to increase the diversity of our models. Our models usually have 6 encoder layers, 6/7 decoder layers, ReLU/GELU BIBREF19 activation functions, and an embedding dimension of 640.
We train 4 English-Kazakh models and 4 Kazakh-English models with different random seeds and hyper-parameters. Then we apply back-translation BIBREF12 and knowledge distillation BIBREF15 for 6 rounds. In each round, we
1. Sample 4M sentences from English monolingual data and back-translate them to Kazakh with the best EN-KK model (on the dev set) in the previous round.
2. Back-translate all Kazakh monolingual data to English with the best KK-EN model in the previous round.
3. Sample 200K sentences from English monolingual data and translate them to Kazakh using the ensemble of all EN-KK models in the previous round.
4. Train 4 English-Kazakh models with BT data from step 2 and KD data from step 3. We up-sample bilingual sentence pairs by 2x.
5. Train 4 Kazakh-English models with BT data from step 1. We up-sample bilingual sentence pairs by 3x.
Submitted Systems ::: English$\rightarrow $Kazakh ::: Result
Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard.
Conclusions
This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve the translation accuracy, and we will conduct such experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also help and are worth exploring.
Acknowledgments
This work is supported by Microsoft Machine Translation team.
A: MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models.
42af0472e6895eaf7b9392674b0d956e64e86b03 | 42af0472e6895eaf7b9392674b0d956e64e86b03_0 | Q: Which language directions are machine translation systems of WMT evaluated on?
Text: Introduction
We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place in 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\rightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and were placed second (ranked by teams) in the other three directions: Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh.
Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:
Introduction ::: Multi-agent dual learning (MADL)
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations.
Introduction ::: Masked sequence-to-sequence pretraining (MASS)
Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations.
Introduction ::: Neural architecture optimization (NAO)
As well known, the evolution of neural network architecture plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architecture in a continuous and more compact space given the historically observed architectures and their performances. It was applied in English$\leftrightarrow $Finnish translations in our submitted systems.
Introduction ::: Soft contextual data augmentation (SCA)
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems.
Our Techniques ::: Multi-agent dual learning (MADL)
MADL is an enhanced version of dual learning BIBREF1, BIBREF6. It leverages $N$ primal translation models $f_i$ and $N$ dual translation models $g_j$ for training, and eventually outputs one $f_0$ and one $g_0$ for inference, where $f_i:\mathcal {X}\mapsto \mathcal {Y},g_j:\mathcal {Y}\mapsto \mathcal {X}$, $i,j\in \lbrace 0,1,\cdots ,N-1\rbrace $. All these models are pre-trained on bilingual data. The $i$-th primal model $f_i$ has a non-negative weight $\alpha _i$ and the $j$-th dual model $g_j$ has a non-negative weight $\beta _j$. All the $\alpha _\cdot $'s and $\beta _\cdot $'s are hyper-parameters. Let $F_\alpha $ denote a combined translation model from $\mathcal {X}$ to $\mathcal {Y}$, and $G_\beta $ a combined translation model from $\mathcal {Y}$ to $\mathcal {X}$,
$F_\alpha $ and $G_\beta $ work as follows: for any $x\in \mathcal {X}$ and $y\in \mathcal {Y}$,
Let $\mathcal {B}$ denote the bilingual dataset. Let $\mathcal {M}_x$ and $\mathcal {M}_y$ denote the monolingual data of $\mathcal {X}$ and $\mathcal {Y}$. The training objective function of MADL can be written as follows:
Note that $f_{>0}$ and $g_{>0}$ will not be optimized during training and we eventually output $f_0$ and $g_0$ for translation. More details can be found in BIBREF0.
Our Techniques ::: Masked sequence-to-sequence pre-training (MASS)
MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ is masked, where $0<u<v<m$ and $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.
MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function:
where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:
where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3.
Our Techniques ::: Neural architecture optimization (NAO)
NAO BIBREF4 is a gradient based neural architecture search (NAS) method. It contains three key components: an encoder, an accuracy predictor, and a decoder, and optimizes a network architecture as follows. (1) The encoder maps a network architecture $x$ to an embedding vector $e_x$ in a continuous space $\mathcal {E}$. (2) The predictor, a function $f$, takes $e_x\in \mathcal {E}$ as input and predicts the dev set accuracy of the architecture $x$. We perform a gradient ascent step, i.e., moving $e_x$ along the direction specified via the gradient $\frac{\partial f}{\partial e_x}$, and get a new embedding vector $e_{x^{\prime }}$:
where $\eta $ is the step size. (3) The decoder is used to map $e_{x^{\prime }}$ back to the corresponding architecture $x^{\prime }$. The new architecture $x^{\prime }$ is assumed to have better performance compared with the original one $x$ due to the property of gradient ascent. NAO repeats the above three steps, and sequentially generates better and better architectures.
To learn high-quality encoder, decoder and performance prediction function, it is essential to have a large quantity of paired training data in the form of $(x,y)$, where $y$ is the dev set accuracy of the architecture $x$. To reduce computational cost, we share weights among different architectures BIBREF8 to aid the generation of such paired training data.
We use NAO to search powerful neural sequence-to-sequence architectures. The search space is illustrated in Fig. FIGREF13. Specifically, each network is composed of $N$ encoder layers and $N$ decoder layers. We set $N=6$ in our experiments. Each encoder layer contains two nodes and each decoder layer contains three nodes. Each node has two branches, each taking the output of another node as input and applying a particular operator (OP), for example identity, self-attention or convolution, to generate its output. The outputs of the two branches are added together as the output of the node. For each layer, we search: 1) the operator at each branch of every node (for a comprehensive list of the different OPs, please refer to the Appendix of this paper); and 2) the topology of connections between nodes within the layer. In the middle part of Fig. FIGREF13, we plot the possible connections within the nodes of a layer specified by all candidate architectures, with a particular highlight of Transformer BIBREF9.
To construct the final network, we do not adopt the typical approach of stacking the same layer multiple times. Instead, we allow the layers in the encoder/decoder to have different architectures and directly search a personalized architecture for each layer. We found that such a design significantly improves the performance due to the greater flexibility.
Our Techniques ::: Soft contextual data augmentation (SCA)
SCA is a data augmentation technology for NMT BIBREF5, which replaces a randomly chosen word in a sentence with its soft version. For any word $w \in V$, its soft version is a distribution over the vocabulary of $|V|$ words: $P(w) = (p_1(w), p_2(w), ..., p_{|V|}(w))$, where $p_j(w) \ge 0$ and $\sum _{j=1}^{|V|}p_j(w) = 1$.
Given the distribution $P(w)$, one may simply sample a word from this distribution to replace the original word $w$. Different from this method, we directly use this distribution vector to replace the randomly chosen word $w$ from the original sentence. Suppose $E$ is the embedding matrix of all the $|V|$ words. The embedding of the soft version of $w$ is
which is the expectation of word embeddings over the distribution.
In our systems, we leverage a pre-trained language model to compute $P(w)$ and condition on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have
where $LM(v_j|x_{<t})$ denotes the probability of the $j$-th word $v_j$ in the vocabulary appearing after the sequence $x_1, x_2, \cdots , x_{t-1}$. The language model is pre-trained using the monolingual data.
Submitted Systems ::: English$\leftrightarrow $German
We submit constrained systems to both English to German and German to English translations, with the same techniques.
Submitted Systems ::: English$\leftrightarrow $German ::: Dataset
We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual data from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as the bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set.
Submitted Systems ::: English$\leftrightarrow$German ::: Model Configuration
We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and the decoder have six layers. The dropout rate is fixed at $0.2$. We set the batch size to 4096 and the parameter --update-freq to 16. We apply the Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$.
Submitted Systems ::: English$\leftrightarrow$German ::: Training Pipeline
The pipeline consists of three steps:
1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$.
2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs. (A code sketch of this back-translation step is given after the list.)
3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs.
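As referenced in step 2 above, the back-translation stage can be sketched as follows. This is a simplified illustration rather than our exact implementation: reverse_model.translate() stands in for beam-search decoding with the pre-trained reverse-direction model ($\bar{f}_3$ or $\bar{g}_3$), and the noising operations (word dropping and local shuffling) follow the general recipe of BIBREF12.

```python
import random

def add_noise(tokens, drop_prob=0.1, max_shuffle_distance=3):
    """Noise a back-translated sentence: random word dropping plus local shuffling."""
    kept = [t for t in tokens if random.random() > drop_prob] or tokens[:1]
    keys = [i + random.uniform(0, max_shuffle_distance) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda pair: pair[0])]

def back_translate(monolingual, reverse_model, bitext):
    """Build synthetic parallel data and merge it with the real bilingual data.

    monolingual   : list of tokenized target-language sentences.
    reverse_model : model translating target -> source; .translate() is a
                    placeholder for beam-search decoding.
    """
    synthetic = [(add_noise(reverse_model.translate(sent)), sent) for sent in monolingual]
    return bitext + synthetic
```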
Submitted Systems ::: English$\leftrightarrow$German ::: Results
The results, evaluated by sacreBLEU, are summarized in Table TABREF24. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on newstest2018 English$\rightarrow $German translation, a $1.8$-point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement and demonstrating the effectiveness of our method.
For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain
where $f^{(j)}$ is the $j$-th translation model we accumulated, and $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU score on WMT16 (i.e., the validation set) drops.
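A minimal sketch of this test-source distillation step is given below. It only illustrates the construction of $\mathcal {E}$ and of the fine-tuning corpus; model.translate() is a placeholder for decoding with an accumulated model, and the sampling details are simplified.

```python
import random

def build_distilled_set(test_sources, models):
    """E = {(s, f_j(s))} for every source sentence s and every accumulated model f_j."""
    return [(s, f.translate(s)) for s in test_sources for f in models]

def build_finetune_corpus(distilled, bitext, seed=0):
    """Mix the distilled pairs with |E| randomly selected real bitext pairs (B_2)."""
    rng = random.Random(seed)
    sampled_bitext = rng.sample(bitext, k=min(len(distilled), len(bitext)))
    return sampled_bitext + distilled
```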
We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks.
Submitted Systems ::: German$\leftrightarrow$French
For German$\leftrightarrow $French translation, we follow a process similar to the one used for the English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge the “commoncrawl”, “europarl-v7” and the part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rules and training procedure are the same as those used in Section SECREF17. We split $9k$ sentences from the “dev08_14” set as the validation set and use the remaining ones as the test set.
The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27.
Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively.
Our submitted German$\rightarrow $French is a single system trained by MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German is an ensemble of three independently trained models, achieving $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard.
Submitted Systems ::: Chinese$\rightarrow$English ::: Dataset
For Chinese$\rightarrow $English translation, we use all the bilingual and monolingual data provided by the WMT official website, as well as extra bilingual and monolingual data crawled from the web. We filter the total 24M bilingual pairs from WMT using the script filter_interactive.py as described in Section SECREF17 and get 18M sentence pairs. We use the Chinese monolingual data from the XMU monolingual corpus and English monolingual data from News Crawl, as well as the English sentences from all English-XX language pairs in WMT. We use 100M additional parallel sentences drawn from UN data, Open Subtitles and web-crawled data, which are filtered using the same rule described above, as well as fast_align and an in/out-of-domain filter. Finally, we get 38M bilingual pairs. We also crawled 80M additional Chinese monolingual sentences from Sougou, China News, Xinhua News, Sina News and Ifeng News, and 2M English monolingual sentences from China News and Reuters. We use newstest2017 and newstest2018 on Chinese-English as development datasets.
We normalize the Chinese sentences from SBC case to DBC case, remove non-printable characters, and tokenize with both Jieba and PKUSeg to increase diversity. For English sentences, we remove non-printable characters and tokenize with the Moses tokenizer. We follow previous practice BIBREF13 and apply Byte-Pair Encoding (BPE) BIBREF14 separately for Chinese and English, each with a 40K vocabulary.
Submitted Systems ::: Chinese$\rightarrow$English ::: MASS Pre-training
We pre-train MASS (Transformer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and a total of 18M and 56M bilingual sentence pairs for the supervised settings (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equations DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both the 18M and 56M bilingual sentence pairs to get the baseline translation models for both Chinese$\rightarrow $English and English$\rightarrow $Chinese.
Submitted Systems ::: Chinese$\rightarrow$English ::: Back Translation and Knowledge Distillation
We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model.
Submitted Systems ::: Chinese$\rightarrow$English ::: Results
The results on newstest2017 and newstest2018 are shown in Table TABREF37. We list two baseline Transformer_big systems which use 18M bilingual data (constraint) and 56M bilingual data (unconstraint) respectively. The pre-trained model achieves about 1 BLEU point improvement after fine-tuning on both 18M and 56M bilingual data. After iterative back translation (BT) and knowledge distillation (KD), as well as re-ranking, our system achieves 30.8 and 30.9 BLEU points on newstest2017 and newstest2018 respectively.
Submitted Systems ::: Chinese$\rightarrow$English ::: WMT19 Submission
For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as pseudo-generated data according to the relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard.
Submitted Systems ::: English$\leftrightarrow$Lithuanian
For English$\leftrightarrow $Lithuanian translation, we follow a process similar to that for the Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which amounts to 2.24M pairs after filtration. We use the same English monolingual data as used for Chinese$\rightarrow $English. We select 100M Lithuanian monolingual sentences from the official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news sentences from the LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.
All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual sentences as well as 12M Lithuanian monolingual sentences into 5 parts through sampling with replacement, to train different models independently and thereby increase diversity in re-ranking/ensembling. Each model uses 8M English monolingual sentences and 6M Lithuanian monolingual sentences. For our WMT19 submission, unlike for Chinese$\rightarrow $English, the speculation technique is not used.
The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place).
Submitted Systems ::: English$\leftrightarrow$Finnish ::: Preprocess
We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them.
Submitted Systems ::: English$\leftrightarrow$Finnish ::: Architecture search
We use NAO to search sequence-to-sequence architectures for the English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we do not aim to find better neural architectures than Transformer; instead, we target models with performance comparable to Transformer that provide diversity in the re-ranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards, and the discovered neural architecture, named NAONet, is visualized in the Appendix.
Submitted Systems ::: English$\leftrightarrow$Finnish ::: Train single models
The final system for English-Finnish is obtained through re-ranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer), and NAONet decoding from left to right. All the models have 6-6 layers in encoder/decoder, and are obtained using the same process, which is detailed below.
Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations.
Step 2: Back translation. Do the normal back translation BIBREF11, BIBREF1 using the two models obtained in Step 1. Specifically, we choose $10M$ monolingual English sentences, use $P_1(y|x)$ to generate the $10M$ pseudo bitext with beam search (beam size is set to 5), and mix it with the bilingual data to continue the training of $P_1(x|y)$. The mixing ratio is set to $1:1$ through up-sampling. The model obtained through such a process is denoted as $P_2(x|y)$. The same process is applied to the opposite direction and the new model $P_2(y|x)$ is attained.
Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.
Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.
To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48.
Submitted Systems ::: English$\leftrightarrow$Finnish ::: Re-ranking
We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50.
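A sketch of the re-ranking computation for a single source sentence is given below; model.logprob() is a placeholder for scoring a candidate with one of the three checkpoints, and the simple length normalization shown is an illustrative choice rather than our exact formula.

```python
def rerank(nbest, models, weights, length_penalty=1.0):
    """Pick the best candidate from an n-best list (beam size 12 in our systems).

    nbest   : list of candidate translations, each a list of tokens.
    models  : the three scorers (L2R Transformer, R2L Transformer, NAONet);
              model.logprob(candidate) is a placeholder for the model's score.
    weights : per-model interpolation weights tuned on the WMT-18 test sets.
    """
    def score(candidate):
        weighted = sum(w * m.logprob(candidate) for w, m in zip(weights, models))
        return weighted / (max(1, len(candidate)) ** length_penalty)
    return max(nbest, key=score)
```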
We would also like to investigate the influence of NAONet on the re-ranking results. To achieve that, in re-ranking we replace NAONet with another model from L2R Transformer, trained with the same process as in subsection SECREF45 and differing only in random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that the new architecture NAONet discovered via NAO brings more diversity in the ranking, thus leading to better results. We also report similar results for the Finnish-English task in Table TABREF51.
Our systems achieve $27.4$ BLEU for English$\rightarrow $Finnish and $31.9$ BLEU for Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively.
Submitted Systems ::: Russian$\rightarrow$English ::: Dataset
We use the bitext data from several corpora: ParaCrawl, Common Crawl, News Commentary, Yandex Corpus, and UN Parallel Corpus. We also use the News Crawl corpora as monolingual data. The data is filtered by rules such as sentence length and language identification, resulting in a training dataset with 16M bilingual pairs and 40M monolingual sentences (20M for English and 20M for Russian). We use the WMT17 and WMT18 test sets as development data. The two languages use separate vocabularies, each with 50K BPE merge operations.
Submitted Systems ::: Russian$\rightarrow$English ::: Our system
Our final system for Russian$\rightarrow $English translation is a combination of Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline model. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic data. Combining both bilingual and synthetic data, we get a large train corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedures, 5 different models are trained and ensembled for final submission.
Submitted Systems ::: Russian$\rightarrow$English ::: Results
Our final submission achieves 40.1 BLEU score, ranked first in the leaderboard. Table TABREF56 reports the results of our system on the development set.
Submitted Systems ::: English$\rightarrow$Kazakh ::: Dataset
We notice that most of the parallel data are out of domain. Therefore, we crawl some external data:
(1) We crawl all news articles from inform.kz, a Kazakh-English news website. Then we match an English news article to a Kazakh one by matching their images with image hashing. In this way, we find 10K pairs of bilingual news articles. We use their titles as additional parallel data. These data are in-domain and useful in training.
(2) We crawl 140K parallel sentence pairs from glosbe.com. Although most of these sentences are out-of-domain, they significantly extend the size of our parallel dataset and lead to better results.
Because most of our parallel training data are noisy, we filter these data with some rules: (1) For the KazakhTV dataset, we remove any sentence pair with an alignment score less than 0.05. (2) For the Wiki Titles dataset, we remove any sentence pair that starts with User or NGC. (3) For all datasets, we remove any sentence pair in which the English sentence contains no lowercase letters. (4) For all datasets, we remove any sentence pair where the length ratio is greater than 2.5:1.
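The rules above can be written as a small filter, sketched below. The corpus tags and the align_score argument are assumptions about how the corpora are stored, and applying rule (2) to the English side is likewise an assumption made for illustration.

```python
import re

def keep_pair(english, kazakh, corpus, align_score=None):
    """Return True if the sentence pair survives the filtering rules above."""
    # (1) KazakhTV: drop pairs whose alignment score is below 0.05.
    if corpus == "kazakhtv" and (align_score is None or align_score < 0.05):
        return False
    # (2) Wiki Titles: drop pairs that start with "User" or "NGC".
    if corpus == "wikititles" and english.startswith(("User", "NGC")):
        return False
    # (3) All datasets: the English sentence must contain a lowercase letter.
    if not re.search(r"[a-z]", english):
        return False
    # (4) All datasets: drop pairs whose length ratio exceeds 2.5:1.
    n_en, n_kk = len(english.split()), len(kazakh.split())
    return max(n_en, n_kk) / max(1, min(n_en, n_kk)) <= 2.5
```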
We tokenize all our data using the Moses Decoder. We learn a shared BPE BIBREF14 from all our data (including all WMT19 parallel data, WMT19 monolingual data, glosbe, inform.kz news titles, and inform.kz news contents) and get a shared vocabulary of 49,152 tokens. Finally, our dataset consists of 300K bilingual sentence pairs, 700K Kazakh monolingual sentences, and many English monolingual sentences.
Submitted Systems ::: English$\rightarrow$Kazakh ::: Our system
Our model is based on the Transformer BIBREF9. We vary the hyper-parameters to increase the diversity of our model. Our models usually have 6 encoder layers, 6/7 decoder layers, ReLU/GELU BIBREF19 activation function, and an embedding dimension of 640.
We train 4 English-Kazakh models and 4 Kazakh-English models with different random seeds and hyper-parameters. Then we apply back-translation BIBREF12 and knowledge distillation BIBREF15 for 6 rounds. In each round, we perform the following steps (sketched in code after the list):
1. Sample 4M sentences from English monolingual data and back-translate them to Kazakh with the best EN-KK model (on the dev set) in the previous round.
2. Back-translate all Kazakh monolingual data to English with the best KK-EN model in the previous round.
3. Sample 200K sentences from English monolingual data and translate them to Kazakh using the ensemble of all EN-KK models in the previous round.
4. Train 4 English-Kazakh models with BT data from step 2 and KD data from step 3. We up-sample bilingual sentence pairs by 2x.
5. Train 4 Kazakh-English models with BT data from step 1. We up-sample bilingual sentence pairs by 3x.
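The round described above can be sketched as follows; train() and ensemble_translate() are user-supplied placeholders for the actual Transformer training and ensemble decoding, and the dev_bleu attribute used to pick the best model of the previous round is likewise an assumption made for illustration.

```python
import random

def bt_kd_round(en_mono, kk_mono, bitext, en2kk_models, kk2en_models,
                train, ensemble_translate, seed=0):
    """One round of back translation (BT) and knowledge distillation (KD), steps 1-5 above.

    train(data, seed)                -> a newly trained model (placeholder).
    ensemble_translate(models, sent) -> translation by the model ensemble (placeholder).
    Each model is assumed to expose .translate(sent) and a .dev_bleu score.
    """
    rng = random.Random(seed)
    best_en2kk = max(en2kk_models, key=lambda m: m.dev_bleu)
    best_kk2en = max(kk2en_models, key=lambda m: m.dev_bleu)

    # Step 1: BT data for the KK->EN models (synthetic Kazakh source, real English target).
    bt_for_kk2en = [(best_en2kk.translate(s), s) for s in rng.sample(en_mono, 4_000_000)]
    # Step 2: BT data for the EN->KK models (synthetic English source, real Kazakh target).
    bt_for_en2kk = [(best_kk2en.translate(s), s) for s in kk_mono]
    # Step 3: KD data for the EN->KK models from the ensemble of all EN->KK models.
    kd_for_en2kk = [(s, ensemble_translate(en2kk_models, s))
                    for s in rng.sample(en_mono, 200_000)]
    # Step 4: retrain 4 EN->KK models; the real bitext is up-sampled 2x.
    new_en2kk = [train(bitext * 2 + bt_for_en2kk + kd_for_en2kk, seed=i) for i in range(4)]
    # Step 5: retrain 4 KK->EN models; the real bitext is up-sampled 3x.
    new_kk2en = [train(bitext * 3 + bt_for_kk2en, seed=i) for i in range(4)]
    return new_en2kk, new_kk2en
```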
Submitted Systems ::: English$\rightarrow$Kazakh ::: Result
Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard.
Conclusions
This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve the translation accuracy and will conduct experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also help and are worthy of exploration.
Acknowledgments
This work is supported by Microsoft Machine Translation team. | German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\leftrightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh |
a85698f19a91ecd3cd3a90a93a453d2acebae1b7 | a85698f19a91ecd3cd3a90a93a453d2acebae1b7_0 | Q: Approximately how much computational cost is saved by using this model?
Text: Conditional Computation
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.
Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
Our Approach: The Sparsely-Gated Mixture-of-Experts Layer
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Related work on Mixtures of Experts
Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts. The mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
The Structure of the Mixture-of-Experts layer
The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks" INLINEFORM1 , and a “gating network" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0
We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .
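A minimal NumPy sketch of this sparse combination, assuming the gate vector has already been computed, is given below; only the experts with nonzero gate values are evaluated, which is where the computational savings come from.

```python
import numpy as np

def moe_forward(x, gates, experts):
    """y = sum_i G(x)_i * E_i(x), evaluating only the experts with nonzero gates.

    x       : (d_in,) input vector.
    gates   : (n,) sparse gate vector G(x); with top-k gating, at most k entries are nonzero.
    experts : list of n callables, each mapping a (d_in,) vector to a (d_out,) vector.
    """
    active = np.flatnonzero(gates)                 # indices i with G(x)_i != 0
    outputs = [gates[i] * experts[i](x) for i in active]
    return np.sum(outputs, axis=0)
```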
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers.
Gating Network
A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0
We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1
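The following NumPy sketch illustrates the gating computation for a single input. It assumes the formulation described above: noisy logits obtained by adding Gaussian noise scaled by a softplus of a second projection, keeping only the top k logits (the rest set to negative infinity), then a softmax. Parameter names are illustrative.

```python
import numpy as np

def softplus(z):
    """Numerically stable softplus: log(1 + exp(z))."""
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def noisy_top_k_gating(x, W_g, W_noise, k, rng, train=True):
    """Sparse gate vector G(x) for a single input x.

    x : (d,) input vector.  W_g, W_noise : (d, n) trainable weight matrices.
    rng : a numpy Generator, e.g. np.random.default_rng().
    """
    clean = x @ W_g                                          # (n,) clean logits
    if train:
        noise_std = softplus(x @ W_noise)                    # tunable per-expert noise scale
        h = clean + rng.standard_normal(clean.shape) * noise_std
    else:
        h = clean
    top_k = np.argsort(h)[-k:]                               # indices of the k largest logits
    masked = np.full_like(h, -np.inf)
    masked[top_k] = h[top_k]                                 # non-top-k logits become -inf
    exp = np.exp(masked - h[top_k].max())                    # softmax over surviving logits
    return exp / exp.sum()                                   # zero gate everywhere else
```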
We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network.
The Shrinking Batch Problem
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.
In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
Network Bandwidth
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
Balancing Expert Utilization
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1
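A sketch of this auxiliary loss, assuming the gate values for a batch have been collected into a matrix, is given below; the default scaling factor is illustrative.

```python
import numpy as np

def importance_loss(gate_values, w_importance=0.1):
    """L_importance = w_importance * CV(Importance(X))^2.

    gate_values : (batch, n) matrix of gate vectors G(x) for a batch X.
    Importance(X) is the batchwise (column-wise) sum of gate values per expert,
    and CV is the coefficient of variation (standard deviation / mean).
    """
    importance = gate_values.sum(axis=0)                     # (n,)
    cv = importance.std() / (importance.mean() + 1e-10)
    return w_importance * cv ** 2
```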
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results.
1 Billion Word Language Modeling Benchmark
This dataset, introduced by BIBREF28 consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.
Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .
To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .
100 Billion Word Google News Corpus
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .
Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
Machine Translation (Single Language Pair)
Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .
We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data.
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Multilingual Machine Translation
BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Conclusion
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
Appendices
Load-Balancing Loss
As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0
Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0
Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0
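Putting the definitions above together, a sketch of the smooth load estimator and the resulting load loss is shown below. It assumes that P(x, i) equals the normal CDF of the clean logit minus the k-th greatest noisy logit excluding component i, divided by the per-component noise scale, which is our reading of the derivation above; the scaling factor is illustrative.

```python
import numpy as np
from scipy.stats import norm

def load_loss(clean_logits, noisy_logits, noise_std, k, w_load=0.1):
    """L_load = w_load * CV(Load(X))^2 with Load(X)_i = sum_x P(x, i).

    clean_logits : (batch, n) values of x . W_g
    noisy_logits : (batch, n) values of H(x), i.e. clean logits plus sampled noise
    noise_std    : (batch, n) values of Softplus(x . W_noise)
    """
    batch, n = noisy_logits.shape
    p = np.empty((batch, n))
    for b in range(batch):
        for i in range(n):
            others = np.delete(noisy_logits[b], i)
            kth_excluding_i = np.sort(others)[-k]            # k-th greatest of H(x) excluding i
            p[b, i] = norm.cdf((clean_logits[b, i] - kth_excluding_i) / noise_std[b, i])
    load = p.sum(axis=0)                                     # smooth per-expert example counts
    cv = load.std() / (load.mean() + 1e-10)
    return w_load * cv ** 2
```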
To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.
We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Results are reported in Table TABREF58 . All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert.
Hierarchical Mixture of Experts
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0
Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .
It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
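A sketch of the two-level forward pass, with the same sparsity-driven skipping as in the flat case, is shown below; the gating functions are passed in as callables and are assumed to already return sparse vectors.

```python
import numpy as np

def hierarchical_moe_forward(x, primary_gate, secondary_gate, experts):
    """y_H = sum_i G_primary(x)_i * sum_j G_i(x)_j * E_{i,j}(x).

    primary_gate   : (a,) sparse gate vector over the a expert groups.
    secondary_gate : callable, secondary_gate(i, x) -> (b,) sparse gate of group i.
    experts        : experts[i][j] is the j-th expert network of group i.
    """
    y = 0.0
    for i in np.flatnonzero(primary_gate):                   # only the selected groups
        g_i = secondary_gate(i, x)
        for j in np.flatnonzero(g_i):                        # only the selected experts
            y = y + primary_gate[i] * g_i[j] * experts[i][j](x)
    return y
```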
1 Billion Word Language Modeling Benchmark - Experimental Details
Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .
Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
100 Billion Word Google News Corpus - Experimental Details
The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .
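A sketch of this factored estimator for a single weight matrix is given below; the exponential-decay constant and the epsilon are assumptions made for illustration, as the text above only specifies the factored structure.

```python
import numpy as np

class FactoredSecondMoment:
    """Row/column-factored replacement for Adam's full second-moment matrix."""

    def __init__(self, rows, cols, decay=0.999, eps=1e-30):
        self.row = np.zeros(rows)          # row-wise averages of squared gradients
        self.col = np.zeros(cols)          # column-wise averages of squared gradients
        self.decay, self.eps = decay, eps

    def update(self, grad):
        sq = grad ** 2
        self.row = self.decay * self.row + (1.0 - self.decay) * sq.mean(axis=1)
        self.col = self.decay * self.col + (1.0 - self.decay) * sq.mean(axis=0)

    def estimate(self):
        """Full-matrix estimator: the outer product of the two vectors divided by
        the mean of either one (both vectors share the same mean up to rounding)."""
        return np.outer(self.row, self.col) / (self.row.mean() + self.eps)
```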
We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 .
Machine Translation - Experimental Details
Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention mechanism. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces") BIBREF42 for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .
We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .
Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of the number of words in the training data's source sentences processed, for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a" introduces the direct object in a verb phrase indicating importance or leadership.
Strictly Balanced Gating
Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.
Recall that we define the softmax gating function to be: DISPLAYFORM0
To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0
To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0
To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0
As our experiments suggest, and as was also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0
Attention Function
The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function" INLINEFORM0 which takes a “source vector" INLINEFORM1 and a “target vector" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.
For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. | Unanswerable |
af073d84b8a7c968e5822c79bef34a28655886de | af073d84b8a7c968e5822c79bef34a28655886de_0 | Q: What improvement does the MOE model make over the SOTA on machine translation?
Text: Conditional Computation
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation have been proposed for training the gating decisions.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.
Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
Our Approach: The Sparsely-Gated Mixture-of-Experts Layer
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Related work on Mixtures of Experts
Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts. The mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
The Structure of the Mixture-of-Experts layer
The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks" INLINEFORM1 , and a “gating network" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0
We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .
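As a concrete illustration of how this sparsity saves computation, the sketch below gathers only the examples routed to each expert and skips experts whose gate values are all zero. This is a minimal PyTorch-style sketch written for this description rather than the paper's implementation; the class name, tensor shapes, and the choice of framework are assumptions.

```python
# Minimal sketch of the MoE combination y = sum_i G(x)_i * E_i(x) with sparse gates.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, d_model, d_hidden, n_experts):
        super().__init__()
        # Each expert is a small feed-forward network with identical architecture
        # but its own parameters, as described above.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)])

    def forward(self, x, gates):
        # x: [batch, d_model]; gates: [batch, n_experts], mostly zeros.
        y = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            idx = gates[:, i].nonzero(as_tuple=True)[0]  # examples routed to expert i
            if idx.numel() == 0:
                continue                                 # expert not used: no compute
            y[idx] += gates[idx, i].unsqueeze(1) * expert(x[idx])
        return y
```

In a distributed setting the gather and scatter above become a cross-device dispatch, as discussed in the sections on the shrinking batch problem and network bandwidth below.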
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers.
Gating Network
A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0
We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1
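A minimal sketch of this noisy top-k gating follows the description above: tunable Gaussian noise is added to the clean gating logits, only the top k values are kept (the rest are set to negative infinity), and a softmax produces the sparse gate vector. Using a softplus to keep the noise scale positive, and all of the variable names, are assumptions made for illustration.

```python
# Sketch of noisy top-k gating; w_gate and w_noise are the two trainable matrices.
import torch
import torch.nn.functional as F

def noisy_top_k_gating(x, w_gate, w_noise, k, train=True):
    clean_logits = x @ w_gate                            # [batch, n_experts]
    if train:
        noise_stddev = F.softplus(x @ w_noise)           # per-component noise scale
        logits = clean_logits + torch.randn_like(clean_logits) * noise_stddev
    else:
        logits = clean_logits
    top_vals, top_idx = logits.topk(k, dim=-1)           # keep only the top k logits
    sparse_logits = torch.full_like(logits, float('-inf'))
    sparse_logits.scatter_(-1, top_idx, top_vals)
    return F.softmax(sparse_logits, dim=-1)              # exact zeros outside the top k
```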
We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network.
The Shrinking Batch Problem
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.
In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
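The batching trick described here amounts to folding the time dimension into the batch dimension before calling the MoE; the tensor shapes and names in the sketch below are assumptions.

```python
# Treat every timestep of the previous layer's output as an independent example,
# so the MoE layer sees a batch that is `time` times larger.
import torch

def apply_moe_over_time(moe, gate_fn, h):       # h: [batch, time, d_model]
    b, t, d = h.shape
    flat = h.reshape(b * t, d)                  # one big batch of b * t positions
    y = moe(flat, gate_fn(flat))                # same MoE applied at every position
    return y.reshape(b, t, d)
```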
We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
Network Bandwidth
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of ReLU-activated units. Since the two weight matrices in such an expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
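A quick back-of-the-envelope check of this ratio for a one-hidden-layer expert, counting multiply-adds as single operations as in the text above:

```python
# Compute-to-I/O ratio of an expert with one hidden layer: two weight matrices of
# sizes d_model x d_hidden and d_hidden x d_model versus d_model floats in and out.
def compute_to_io_ratio(d_model, d_hidden):
    multiply_adds = d_model * d_hidden + d_hidden * d_model  # two matmuls per example
    values_moved = d_model + d_model                         # input + output per example
    return multiply_adds / values_moved                      # equals d_hidden

print(compute_to_io_ratio(512, 1024))  # 1024.0 -- the hidden layer size, as stated above
```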
Balancing Expert Utilization
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1
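A sketch of this importance loss, following the description above; the name w_importance for the hand-tuned scaling factor is an assumption.

```python
# Importance of an expert = batchwise sum of its gate values; the loss is the squared
# coefficient of variation of the importances, scaled by w_importance.
import torch

def importance_loss(gates, w_importance):       # gates: [batch, n_experts]
    importance = gates.sum(dim=0)               # batchwise sum per expert
    cv = importance.std() / (importance.mean() + 1e-10)
    return w_importance * cv ** 2               # zero when all experts are equally important
```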
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results.
1 Billion Word Language Modeling Benchmark
This dataset, introduced by BIBREF28 , consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.
Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .
To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .
100 Billion Word Google News Corpus
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .
Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
Machine Translation (Single Language Pair)
Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .
We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data.
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Multilingual Machine Translation
BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Conclusion
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
Appendices
tocsectionAppendices
Load-Balancing Loss
As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0
Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0
Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0
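A sketch of the smooth load estimator and the resulting load loss is given below. It follows the verbal description above: for each example and expert, we estimate the probability that a fresh noise draw on that component would still place it above the k-th greatest noisy logit among the other components. The clean logits, noisy logits and noise standard deviations are the quantities computed inside the noisy top-k gating sketch earlier; the indexing details and the name w_load are assumptions.

```python
# Smooth, differentiable estimate of per-expert load, and the load-balancing loss.
import torch

def load_loss(clean_logits, noisy_logits, noise_stddev, k, w_load):
    # All inputs have shape [batch, n_experts].
    normal = torch.distributions.Normal(0.0, 1.0)
    top_vals, _ = noisy_logits.topk(k + 1, dim=-1)          # k+1 greatest noisy logits
    kth_incl = top_vals[:, k - 1:k]                         # k-th greatest, including i
    kth_excl = top_vals[:, k:k + 1]                         # (k+1)-th greatest overall
    in_top_k = noisy_logits >= kth_incl                     # was component i in the top k?
    # k-th greatest noisy logit *excluding* component i itself:
    threshold = torch.where(in_top_k, kth_excl, kth_incl)
    prob = normal.cdf((clean_logits - threshold) / (noise_stddev + 1e-10))
    load = prob.sum(dim=0)                                  # smooth per-expert load
    cv = load.std() / (load.mean() + 1e-10)
    return w_load * cv ** 2
```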
To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.
We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Results are reported in Table TABREF58 . All the combinations containing at least one of the two losses led to very similar model quality, while having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert.
Hierachical Mixture of Experts
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0
Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .
It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
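A dense-for-clarity sketch of this two-level combination, with primary gates weighting expert groups and each selected group applying its own secondary gating; the names are illustrative assumptions.

```python
# Hierarchical MoE combination: y = sum_i sum_j Gprimary(x)_i * G_i(x)_j * E_ij(x).
# Dense for clarity: a real implementation would gather only the routed examples.
import torch

def hierarchical_moe(x, primary_gates, secondary_gate_fns, expert_groups):
    # primary_gates: [batch, a]; expert_groups[i] is a list of expert callables;
    # secondary_gate_fns[i](x) returns [batch, b] gates for group i.
    y = torch.zeros_like(x)
    for i, group in enumerate(expert_groups):
        g_i = primary_gates[:, i:i + 1]              # [batch, 1]
        if not (g_i != 0).any():
            continue                                 # whole group switched off
        secondary = secondary_gate_fns[i](x)         # [batch, b]
        for j, expert in enumerate(group):
            g_ij = g_i * secondary[:, j:j + 1]       # combined gate for expert (i, j)
            if not (g_ij != 0).any():
                continue
            y = y + g_ij * expert(x)
    return y
```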
1 Billion Word Language Modeling Benchmark - Experimental Details
Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .
Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
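The learning-rate schedule described above can be written as the sketch below; the proportionality constant after warm-up is chosen so the schedule is continuous at step 1000, which is an assumption rather than a detail stated here.

```python
# Linear warm-up followed by inverse-square-root decay of the base learning rate.
def learning_rate(step, base_lr, warmup_steps=1000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps                 # linear increase
    return base_lr * (warmup_steps ** 0.5) / (step ** 0.5)   # proportional to 1/sqrt(step)
```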
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
100 Billion Word Google News Corpus - Experimental Details
The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .
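A sketch of the factored second-moment estimator for a single weight matrix is shown below; the first moment is dropped entirely, matching the choice above, and the decay rate and epsilon values are assumptions.

```python
# Store only row-wise and column-wise averages of the squared gradients; re-form the
# full second-moment estimate as their outer product divided by the mean of either one.
import torch

class FactoredSecondMoment:
    def __init__(self, shape, beta2=0.999, eps=1e-30):
        self.row = torch.zeros(shape[0])   # row-wise averages
        self.col = torch.zeros(shape[1])   # column-wise averages
        self.beta2, self.eps = beta2, eps

    def scaled_gradient(self, grad):       # grad: [rows, cols]
        sq = grad ** 2
        self.row = self.beta2 * self.row + (1 - self.beta2) * sq.mean(dim=1)
        self.col = self.beta2 * self.col + (1 - self.beta2) * sq.mean(dim=0)
        v = torch.outer(self.row, self.col) / (self.row.mean() + self.eps)
        return grad / torch.sqrt(v + self.eps)   # used in place of Adam's grad / sqrt(v)
```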
We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 .
Machine Translation - Experimental Details
Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention mechanism. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces") BIBREF42 for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .
We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .
Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of the number of words in the training data's source sentences processed, for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a" introduces the direct object in a verb phrase indicating importance or leadership.
Strictly Balanced Gating
Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.
Recall that we define the softmax gating function to be: DISPLAYFORM0
To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0
To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0
To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0
As our experiments suggest, and as was also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0
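The two masks can be sketched as follows, assuming m is chosen as batch_size * k / n_experts so that each example is routed to k experts on average; the function names and tie-breaking behaviour are assumptions.

```python
# Batchwise mask used during training (exactly m examples per expert) and the
# threshold-based mask used at inference time.
import torch

def batchwise_mask(gate_logits, k):                  # gate_logits: [batch, n_experts]
    batch, n_experts = gate_logits.shape
    m = (batch * k) // n_experts                     # examples assigned to each expert
    top_idx = gate_logits.topk(m, dim=0).indices     # top m examples for every expert
    mask = torch.zeros_like(gate_logits)
    mask.scatter_(0, top_idx, 1.0)
    return mask                                      # exactly m ones in every column

def threshold_mask(gate_logits, thresholds):         # thresholds: [n_experts], learned
    return (gate_logits > thresholds).float()        # approximates the batchwise mask
```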
Attention Function
The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function" INLINEFORM0 which takes a “source vector" INLINEFORM1 and a “target vector" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.
For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. | 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3, perplexity scores are also better, On the Google Production dataset, our model achieved 1.01 higher test BLEU score |
e8fcfb1412c3b30da6cbc0766152b6e11e17196c | e8fcfb1412c3b30da6cbc0766152b6e11e17196c_0 | Q: What improvement does the MOE model make over the SOTA on language modelling?
Text: Conditional Computation
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation have been proposed for training the gating decisions.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.
Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
Our Approach: The Sparsely-Gated Mixture-of-Experts Layer
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Related work on Mixtures of Experts
Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts. The mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
The Structure of the Mixture-of-Experts layer
The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks" INLINEFORM1 , and a “gating network" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0
We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .
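To make the sparse combination above concrete, the following is a minimal NumPy sketch of the MoE forward pass for a single input vector; the expert networks, gate values, and dimensions are illustrative stand-ins rather than the implementation used in the paper.

```python
import numpy as np

def moe_forward(x, experts, gate_values, output_dim):
    """y = sum_i G(x)_i * E_i(x); experts with a zero gate value are never evaluated."""
    y = np.zeros(output_dim)
    for i in np.flatnonzero(gate_values):          # typically k << n active experts
        y += gate_values[i] * experts[i](x)
    return y

# Toy usage with 8 stand-in experts (one ReLU hidden layer each) and a hand-set sparse gate.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_experts = 16, 32, 16, 8

def make_expert():
    w1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
    w2 = rng.normal(scale=0.1, size=(d_hidden, d_out))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

experts = [make_expert() for _ in range(n_experts)]
gates = np.zeros(n_experts)
gates[[2, 5]] = [0.7, 0.3]                         # only experts 2 and 5 are active
y = moe_forward(rng.normal(size=d_in), experts, gates, d_out)
```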
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers.
Gating Network
A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0
We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1
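A possible rendering of the noisy top-k gating just described, again as a NumPy sketch for a single input; the weight shapes, scales, and initialization are assumptions made for illustration.

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def noisy_top_k_gating(x, w_gate, w_noise, k, rng):
    """H(x)_i = (x W_g)_i + N(0,1) * softplus((x W_noise)_i); keep the top k, softmax the rest to zero."""
    h = x @ w_gate + rng.standard_normal(w_gate.shape[1]) * softplus(x @ w_noise)
    top_k = np.argsort(h)[-k:]
    masked = np.full_like(h, -np.inf)
    masked[top_k] = h[top_k]
    e = np.exp(masked - masked[top_k].max())       # exp(-inf) = 0, so non-top-k gates are exactly 0
    return e / e.sum()

rng = np.random.default_rng(0)
d_model, n_experts = 64, 32
w_gate = rng.normal(scale=0.01, size=(d_model, n_experts))
w_noise = np.zeros((d_model, n_experts))           # the appendix initializes the gating matrices to zero
gates = noisy_top_k_gating(rng.normal(size=d_model), w_gate, w_noise, k=4, rng=rng)
```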
We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network.
The Shrinking Batch Problem
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.
In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
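A small worked example of the batch-size accounting above; the device count, per-device batch size, and expert counts are hypothetical.

```python
# Illustrative arithmetic for the combined-batch scheme (all numbers are hypothetical).
d_devices, per_device_batch, k, n_experts = 16, 1024, 4, 256
naive_expert_batch    = k * per_device_batch / n_experts              # one device alone: ~16 examples per expert
combined_expert_batch = k * per_device_batch * d_devices / n_experts  # synchronous combination: ~256 examples per expert
```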
In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
Network Bandwidth
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of ReLU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
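The following illustrative arithmetic shows why the computation-to-communication ratio of a one-hidden-layer expert equals its hidden size; the layer sizes are assumptions.

```python
# Computation-to-I/O ratio for a one-hidden-layer expert (illustrative layer sizes).
d_in, d_hidden, d_out = 512, 1024, 512
flops_per_example = 2 * (d_in * d_hidden + d_hidden * d_out)  # multiply-adds counted as two ops
values_moved = d_in + d_out                                    # input and output activations per example
ratio = flops_per_example / (2 * values_moved)                 # equals d_hidden, as stated above
```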
Balancing Expert Utilization
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1
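A sketch of the importance loss under the definitions above; the hand-tuned scaling factor is left as a parameter rather than fixed to any particular value.

```python
import numpy as np

def importance_loss(gate_matrix, w_importance):
    """gate_matrix: (batch, n_experts) gate values G(x) for a batch X.

    Importance(X) = column-wise sum of gates; loss = w_importance * CV(Importance)^2.
    """
    importance = gate_matrix.sum(axis=0)
    cv = importance.std() / importance.mean()      # coefficient of variation
    return w_importance * cv ** 2
```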
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results.
1 Billion Word Language Modeling Benchmark
This dataset, introduced by BIBREF28 , consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.
Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .
To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .
100 Billion Word Google News Corpus
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .
Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
Machine Translation (Single Language Pair)
Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .
We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data.
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Multilingual Machine Translation
BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Conclusion
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
Appendices
tocsectionAppendices
Load-Balancing Loss
As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0
Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0
Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0
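A sketch of the smooth load estimator and the resulting load loss, assuming the already-sampled noisy logits H(X) from the gating network are available; SciPy's normal CDF stands in for the CDF in the text, and the scaling factor is again a parameter.

```python
import numpy as np
from scipy.stats import norm

def softplus(z):
    return np.log1p(np.exp(z))

def load_loss(x_batch, noisy_h, w_gate, w_noise, k, w_load):
    """Smooth load estimator and its CV^2 penalty.

    P(x, i) = Phi(((x W_g)_i - kth_excluding(H(x), k, i)) / softplus((x W_noise)_i)),
    Load(X)_i = sum over the batch of P(x, i).
    """
    clean = x_batch @ w_gate                        # (batch, n_experts)
    noise_std = softplus(x_batch @ w_noise)
    batch, n = clean.shape
    p = np.empty_like(clean)
    for i in range(n):
        others = np.delete(noisy_h, i, axis=1)      # H(x) with component i removed
        kth = np.sort(others, axis=1)[:, -k]        # k-th greatest of the remaining components
        p[:, i] = norm.cdf((clean[:, i] - kth) / noise_std[:, i])
    load = p.sum(axis=0)
    cv = load.std() / load.mean()
    return w_load * cv ** 2
```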
To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.
We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Results are reported in Table TABREF58 . All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert.
Hierarchical Mixture of Experts
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0
Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network, respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .
It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
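A sketch of the two-level combination defined above; the primary and secondary gating functions (for example, the noisy top-k gating sketched earlier) and the expert networks are stand-ins.

```python
import numpy as np

def hierarchical_moe(x, primary_gates, secondary_gate_fns, expert_grid, output_dim):
    """y = sum_i sum_j G_primary(x)_i * G_i(x)_j * E_{i,j}(x), skipping inactive groups and experts."""
    y = np.zeros(output_dim)
    for i in np.flatnonzero(primary_gates):         # sparse over expert groups
        secondary_gates = secondary_gate_fns[i](x)  # sparse within the chosen group
        for j in np.flatnonzero(secondary_gates):
            y += primary_gates[i] * secondary_gates[j] * expert_grid[i][j](x)
    return y
```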
1 Billion Word Language Modeling Benchmark - Experimental Details
Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .
Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
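As a quick check of the per-expert parameter count stated above (whether bias terms are included is an assumption):

```python
# Parameter count for one expert as described above (512 -> 1024 ReLU -> 512); biases assumed.
d_in, d_hidden, d_out = 512, 1024, 512
params_per_expert = d_in * d_hidden + d_hidden + d_hidden * d_out + d_out   # = 1,050,112, roughly 1M
```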
The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
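A sketch of the learning-rate schedule described above; making the two pieces meet exactly at the warm-up boundary is an assumption.

```python
def learning_rate(step, base_lr, warmup_steps=1000):
    """Linear warm-up, then decay proportional to the inverse square root of the step number."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps / step) ** 0.5   # continuous at step == warmup_steps
```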
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
100 Billion Word Google News Corpus - Experimental Details
The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .
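A sketch of the factored second-moment approximation; the exponential-moving-average update and its decay rate are assumptions, since the text only specifies that row-wise and column-wise averages are maintained and recombined by an outer product.

```python
import numpy as np

def update_factors(v_row, v_col, grad, decay):
    """Maintain row-wise and column-wise averages of the squared gradients (assumed EMA update)."""
    sq = grad ** 2
    v_row = decay * v_row + (1 - decay) * sq.mean(axis=1)
    v_col = decay * v_col + (1 - decay) * sq.mean(axis=0)
    return v_row, v_col

def factored_second_moment(v_row, v_col):
    """Estimate the full matrix as the outer product of the factors divided by the mean of either one."""
    return np.outer(v_row, v_col) / v_row.mean()
```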
We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 .
Machine Translation - Experimental Details
Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2, respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces") BIBREF42 for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .
We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .
Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a" introduces the direct object in a verb phrase indicating importance or leadership.
Strictly Balanced Gating
Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.
Recall that we define the softmax gating function to be: DISPLAYFORM0
To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0
To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0
To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0
As our experiments suggest, and as was also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0
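A sketch of the masking functions described above; the agreement loss used to learn the thresholds is omitted here because its displayed formula did not survive extraction.

```python
import numpy as np

def masked_gates(g_softmax, mask):
    """Multiply the softmax gates component-wise by a sparse mask and renormalize each row."""
    gm = g_softmax * mask
    return gm / gm.sum(axis=1, keepdims=True)       # assumes at least one surviving gate per example

def batchwise_mask(g_softmax, k):
    """Keep, for each expert (column), its top m = k * batch / n values across the training batch."""
    batch, n = g_softmax.shape
    m = max(1, (k * batch) // n)
    mask = np.zeros_like(g_softmax)
    for j in range(n):
        top_rows = np.argsort(g_softmax[:, j])[-m:]
        mask[top_rows, j] = 1.0
    return mask

def threshold_mask(g_softmax, thresholds):
    """Inference-time approximation: keep g[:, j] wherever it exceeds the learned per-expert threshold T_j."""
    return (g_softmax > thresholds[None, :]).astype(g_softmax.dtype)
```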
Attention Function
The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function" INLINEFORM0 which takes a “source vector" INLINEFORM1 and a “target vector" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.
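A sketch of the GNMT-style attention scores described above, written in the conventional additive form; the assignment of the placeholder weight matrices and vector is an assumption, and the authors' modified function (whose formula is not shown here) is chosen so that these pairwise scores reduce to a few dense matrix multiplications.

```python
import numpy as np

def gnmt_attention_scores(src, tgt, w_src, w_tgt, v):
    """Additive attention A(x_s, y_t) = v . tanh(W_src x_s + W_tgt y_t) for every source/target pair."""
    ps = src @ w_src                                    # (S, n) projected source vectors
    pt = tgt @ w_tgt                                    # (T, n) projected target vectors
    hidden = np.tanh(ps[:, None, :] + pt[None, :, :])   # (S, T, n) hidden layer for all pairs
    return hidden @ v                                   # (S, T) attention logits
```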
For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. | Perplexity is improved from 34.7 to 28.0. |
0cd90e5b79ea426ada0203177c28812a7fc86be5 | 0cd90e5b79ea426ada0203177c28812a7fc86be5_0 | Q: How is the correct number of experts to use decided?
Text: Conditional Computation
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.
Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
Our Approach: The Sparsely-Gated Mixture-of-Experts Layer
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Related work on Mixtures of Experts
Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts, in which the mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
The Structure of the Mixture-of-Experts layer
The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks" INLINEFORM1 , and a “gating network" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0
We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers.
Gating Network
A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0
We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1
We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network.
The Shrinking Batch Problem
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.
In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
Network Bandwidth
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of ReLU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
Balancing Expert Utilization
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results.
1 Billion Word Language Modeling Benchmark
This dataset, introduced by BIBREF28 , consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.
Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .
To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .
100 Billion Word Google News Corpus
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .
Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
Machine Translation (Single Language Pair)
Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .
We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Multilingual Machine Translation
BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Conclusion
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
Appendices
Load-Balancing Loss
As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0
Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0
Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0
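The following NumPy sketch illustrates one way to realize the smooth load estimator and the load loss just described. Since the displayed equations are elided here (DISPLAYFORM placeholders), the exact form of the estimator (the clean logit compared against the k-th greatest noisy logit of the other components, passed through the normal CDF) and the 0.1 default scaling factor are assumptions based on the surrounding text, not the paper's exact formulas.

```python
import numpy as np
from scipy.stats import norm

def softplus(z):
    return np.log1p(np.exp(z))

def smooth_load(x, w_gate, w_noise, k):
    """Smoothed per-expert load for a batch of inputs x (shape [batch, d_model]):
    for each example and expert, the probability -- under a fresh noise draw on
    that component only -- that its gate value would be nonzero, i.e. that its
    logit clears the k-th greatest noisy logit among the other experts."""
    clean = x @ w_gate                                # [batch, n_experts]
    noise_std = softplus(x @ w_noise)                 # per-component noise scale
    noisy = clean + np.random.randn(*clean.shape) * noise_std
    n_experts = noisy.shape[1]
    load = np.zeros(n_experts)
    for i in range(n_experts):
        others = np.delete(noisy, i, axis=1)
        kth = np.sort(others, axis=1)[:, -k]          # k-th greatest, excluding i
        load[i] = norm.cdf((clean[:, i] - kth) / noise_std[:, i]).sum()
    return load

def load_loss(load, w_load=0.1):
    """Square of the coefficient of variation of the load vector, scaled by a
    hand-tuned factor (0.1 is a placeholder, not the value from the paper)."""
    cv = load.std() / load.mean()
    return w_load * cv ** 2
```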
To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.
We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Results are reported in Table TABREF58 . All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert.
Hierarchical Mixture of Experts
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0
Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .
It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
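A minimal sketch of the two-level combination rule, assuming the gating networks and experts are supplied as callables; the dense loops here stand in for the sparse evaluation used in practice, and the argument names are illustrative only.

```python
import numpy as np

def hierarchical_moe(x, primary_gate, secondary_gates, experts, d_out):
    """Two-level MoE output for a single input x. primary_gate(x) returns weights
    over the groups; secondary_gates[i](x) returns weights over the experts of
    group i; experts[i][j](x) is the (i, j)-th expert's output of size d_out."""
    y = np.zeros(d_out)
    for i, g_i in enumerate(primary_gate(x)):
        if g_i == 0.0:                     # sparsity: whole group skipped
            continue
        for j, g_ij in enumerate(secondary_gates[i](x)):
            if g_ij == 0.0:                # sparsity: expert skipped
                continue
            y += g_i * g_ij * experts[i][j](x)
    return y
```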
1 Billion Word Language Modeling Benchmark - Experimental Details
Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .
Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
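For concreteness, here is a sketch of a single expert and the rough parameter/operation accounting implied by the sizes above. Biases are ignored and one multiply-add is counted as one operation; both are assumptions about the bookkeeping, used only to show why four active experts of this size come to roughly 4M ops/timestep.

```python
import numpy as np

def expert_forward(x, w1, b1, w2, b2):
    """One expert: a feed-forward net with a ReLU hidden layer (512 -> 1024 -> 512)."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Rough accounting for the sizes above (weight matrices only, biases ignored):
d_model, d_hidden, k_active = 512, 1024, 4
params_per_expert = d_model * d_hidden + d_hidden * d_model   # ~1.05M parameters
moe_ops_per_timestep = k_active * params_per_expert           # ~4.2M multiply-adds
```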
The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
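A sketch of the learning-rate schedule described above; the base rate and the constant chosen to join the two pieces continuously at the end of warm-up are placeholders, not values from the experiments.

```python
def learning_rate(step: int, base_lr: float = 1.0, warmup_steps: int = 1000) -> float:
    """Linear warm-up for the first warmup_steps, then decay proportional to the
    inverse square root of the step number, joined continuously at the boundary."""
    step = max(step, 1)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps ** 0.5) / (step ** 0.5)
```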
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
100 Billion Word Google News Corpus - Experimental Details
The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .
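The following sketch shows this factored second-moment bookkeeping for a single weight matrix. The decay constant and the placement of the update are assumptions; the text only specifies the row/column averaging and the outer-product reconstruction.

```python
import numpy as np

def factored_second_moment(v_row, v_col, grad, beta2=0.999):
    """Maintain row-wise and column-wise moving averages of the squared gradients
    of a weight matrix, and reconstruct the full matrix of second-moment estimates
    as their outer product divided by the mean of either vector."""
    sq = grad ** 2
    v_row = beta2 * v_row + (1 - beta2) * sq.mean(axis=1)   # one entry per row
    v_col = beta2 * v_col + (1 - beta2) * sq.mean(axis=0)   # one entry per column
    v_hat = np.outer(v_row, v_col) / v_row.mean()           # full-size estimate
    return v_row, v_col, v_hat

# Update sketch with no first moment kept (beta1 = 0):
#   w -= lr * grad / (np.sqrt(v_hat) + eps)
```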
We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 .
Machine Translation - Experimental Details
Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention . All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces") BIBREF42 for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .
We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .
Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a" introduces the direct object in a verb phrase indicating importance or leadership.
Strictly Balanced Gating
Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.
Recall that we define the softmax gating function to be: DISPLAYFORM0
To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0
To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0
To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0
As our experiments suggest, and as also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0
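Since the mask equations themselves are elided above, the sketch below only mirrors the verbal description: a training-time mask that keeps the top m gate values per expert across the batch, and an inference-time mask based on learned per-expert thresholds. The exact value of m and the direction of the threshold comparison are assumptions consistent with the text.

```python
import numpy as np

def batchwise_mask(gates, k):
    """Training-time mask: for each expert (column of the [batch, n_experts] gate
    matrix), keep the top m = k * batch / n_experts values across the batch, so
    every expert receives exactly the same number of examples."""
    batch, n_experts = gates.shape
    m = max(1, (k * batch) // n_experts)
    mask = np.zeros_like(gates)
    for e in range(n_experts):
        top = np.argsort(gates[:, e])[-m:]     # rows with the m largest gate values
        mask[top, e] = 1.0
    return mask

def threshold_mask(gates, thresholds):
    """Inference-time mask: keep gate values exceeding a learned per-expert
    threshold, approximating the batchwise mask when batches are small."""
    return (gates > thresholds).astype(gates.dtype)
```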
Attention Function
The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function" INLINEFORM0 which takes a “source vector" INLINEFORM1 and a “target vector" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.
For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. | varied the number of experts between models |
f01a88e15ef518a68d8ca2bec992f27e7a3a6add | f01a88e15ef518a68d8ca2bec992f27e7a3a6add_0 | Q: What equations are used for the trainable gating network?
Text: Conditional Computation
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.
Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
Our Approach: The Sparsely-Gated Mixture-of-Experts Layer
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Related work on Mixtures of Experts
Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts. The mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
The Structure of the Mixture-of-Experts layer
The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks" INLINEFORM1 , and a “gating network" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0
We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .
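A minimal sketch of the sparse combination just described, with toy linear experts standing in for the feed-forward expert networks; the sizes and gate values are illustrative only.

```python
import numpy as np

def moe_output(x, gates, experts):
    """Weighted sum of expert outputs, skipping every expert whose gate value is
    zero -- which is exactly where the computational savings come from."""
    y = None
    for g, expert in zip(gates, experts):
        if g == 0.0:
            continue                         # G(x)_i == 0: expert never evaluated
        term = g * expert(x)
        y = term if y is None else y + term
    return y

# Toy usage: 8 linear "experts", a sparse gate vector activating only two of them.
experts = [lambda v, W=np.random.randn(4, 4): v @ W for _ in range(8)]
gates = np.array([0.0, 0.7, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0])
y = moe_output(np.ones(4), gates, experts)
```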
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers.
Gating Network
A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0
We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1
We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network.
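The displayed gating equations are elided here (DISPLAYFORM placeholders), so the sketch below follows the verbal description only: input-dependent Gaussian noise is added to the clean logits, all but the top k entries are set to -inf, and a softmax produces the sparse gate vector. Disabling the noise at inference time is an assumption of this sketch.

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def noisy_top_k_gating(x, w_gate, w_noise, k, train=True):
    """Compute a sparse gate vector for one input x: add input-dependent Gaussian
    noise to the clean logits, keep only the top k entries (others set to -inf so
    their softmax weight is exactly 0), then normalize with a softmax."""
    logits = x @ w_gate
    if train:
        logits = logits + np.random.randn(*logits.shape) * softplus(x @ w_noise)
    kth = np.sort(logits)[-k]                          # k-th largest logit
    masked = np.where(logits >= kth, logits, -np.inf)
    exp = np.exp(masked - masked.max())                # masked.max() is finite
    return exp / exp.sum()

# Hypothetical usage: 512-dim input, 32 experts, 4 active per example.
# g = noisy_top_k_gating(x, np.random.randn(512, 32), np.random.randn(512, 32), k=4)
```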
The Shrinking Batch Problem
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:
In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.
In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.
In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.
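A sketch of this "convolutional" application: the time dimension is folded into the batch dimension before the MoE is applied, multiplying the effective expert batch size by the number of unrolled time steps. The function and argument names are illustrative.

```python
import numpy as np

def moe_over_timesteps(hidden, moe_layer):
    """Fold the time dimension into the batch dimension, apply the MoE once to the
    combined [batch * time, d_model] matrix, then restore the original shape."""
    b, t, d = hidden.shape
    out = moe_layer(hidden.reshape(b * t, d))   # any callable mapping [N, d] -> [N, d]
    return out.reshape(b, t, d)
```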
We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
Network Bandwidth
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes INLINEFORM0 _ INLINEFORM1 _ INLINEFORM2 and INLINEFORM3 _ INLINEFORM4 _ INLINEFORM5 , the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
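A quick arithmetic check of this claim, using layer sizes that appear elsewhere in the paper; the specific numbers are illustrative.

```python
# For an expert with one hidden layer of size h, per-example multiply-adds are
# d_in * h + h * d_out, while only d_in + d_out values cross the network, so the
# compute-to-traffic ratio is h when d_in == d_out.
d_in, d_out, h = 512, 512, 2048
compute = d_in * h + h * d_out      # per-example multiply-adds inside the expert
traffic = d_in + d_out              # per-example values sent over the network
print(compute // traffic)           # -> 2048, i.e. the hidden layer size
```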
Balancing Expert Utilization
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.
We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results.
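A sketch of the importance loss as described above; the default scaling factor is a placeholder for the hand-tuned value.

```python
import numpy as np

def importance_loss(gates, w_importance=0.1):
    """gates: [batch, n_experts] gate values. Importance of an expert is the
    batchwise sum of its gate values; the loss is the squared coefficient of
    variation of the importance vector, times a hand-tuned scale (placeholder)."""
    importance = gates.sum(axis=0)
    cv = importance.std() / importance.mean()
    return w_importance * cv ** 2
```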
1 Billion Word Language Modeling Benchmark
This dataset, introduced by BIBREF28 , consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.
The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.
Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .
To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.
The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.
We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 .
100 Billion Word Google News Corpus
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.
We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .
Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.
Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.
Machine Translation (Single Language Pair)
Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .
We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
Multilingual Machine Translation
BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Conclusion
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
Appendices
Load-Balancing Loss
As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0
Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0
Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0
We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0
To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.
We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.
Results are reported in Table TABREF58 . All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert.
Hierarchical Mixture of Experts
If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0
Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .
It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
1 Billion Word Language Modeling Benchmark - Experimental Details
Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .
Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:
MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.
4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.
LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.
The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.
The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .
100 Billion Word Google News Corpus - Experimental Details
The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.
We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .
We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 .
Machine Translation - Experimental Details
Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention mechanism. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we use sub-word units (also known as "wordpieces") BIBREF42 for inputs and outputs in our system.
We use a shared source and target vocabulary of 32K wordpieces. We also use the same beam search technique as proposed in BIBREF3 .
We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .
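For illustration, the sketch below shows the basic shape of such an MoE layer: ReLU feed-forward experts with a hidden size of 2048 and a top-k softmax gate with k=4 experts per input, matching the flat configuration described above. It is a deliberately naive rendering — it omits the noisy gating, the load-balancing terms, the hierarchical gate, the final sigmoid and the efficient per-expert dispatch of the real system — and all class and variable names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    """One expert: a feed-forward network with a single ReLU hidden layer."""
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_model))

    def forward(self, x):
        return self.net(x)

class TopKMoE(nn.Module):
    """Mixture-of-experts layer with plain (noise-free) top-k softmax gating."""
    def __init__(self, n_experts=32, k=4, d_model=512, d_hidden=2048):
        super().__init__()
        self.experts = nn.ModuleList([Expert(d_model, d_hidden) for _ in range(n_experts)])
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x):                          # x: (batch, d_model)
        top_val, top_idx = self.w_gate(x).topk(self.k, dim=-1)
        gates = F.softmax(top_val, dim=-1)         # renormalize over the k selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # naive dispatch, one expert at a time
            for e, expert in enumerate(self.experts):
                sel = top_idx[:, slot] == e
                if sel.any():
                    out[sel] += gates[sel, slot].unsqueeze(1) * expert(x[sel])
        return out
```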
For the multilingual machine translation experiments, we used the same model architecture as for the single-language-pair models, with the following exceptions: we used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
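The schedule just described can be written down directly; a small sketch follows, where the base learning rate and the continuity matching at the end of the plateau are our assumptions rather than values taken from the experiments.

```python
def learning_rate(step, base_lr=1e-3, warmup=2000, hold=8000):
    """Linear warm-up, constant plateau, then inverse-square-root decay."""
    if step < warmup:
        return base_lr * step / warmup
    if step < warmup + hold:
        return base_lr
    # Proportional to 1/sqrt(step), scaled to join the plateau continuously.
    return base_lr * ((warmup + hold) ** 0.5) / (step ** 0.5)
```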
To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .
We evaluated our models using perplexity and the standard BLEU score metric. We report tokenized BLEU scores as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on GitHub), which was also used in BIBREF31 .
Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of the number of source-sentence words (in the training data) processed, for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a" introduces the direct object in a verb phrase indicating importance or leadership.
Strictly Balanced Gating
Due to some peculiarities in our infrastructure (which have since been fixed), at the time we ran some of the machine translation experiments our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function, which we describe below.
Recall that we define the softmax gating function to be: DISPLAYFORM0
To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0
To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0
To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0
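A compact sketch of this batchwise mask is given below, assuming the gate logits come as a (batch, experts) tensor and that k times the batch size is divisible by the number of experts; the function name and shapes are ours.

```python
import torch

def batchwise_topk_mask(logits, k):
    """Keep, for every expert (column), its top m = k * batch / n_experts scores
    across the batch, so that each expert receives exactly m examples."""
    n, n_experts = logits.shape
    m = (k * n) // n_experts
    top_idx = logits.topk(m, dim=0).indices        # the m best examples per expert
    mask = torch.zeros_like(logits)
    mask.scatter_(0, top_idx, 1.0)
    return mask                                    # (batch, n_experts), entries in {0, 1}
```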
As our experiments suggest, and as was also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0
To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0
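At inference time the batchwise mask is therefore replaced by the per-expert thresholds; a one-line sketch of that replacement follows. The exact form of the auxiliary loss is elided above, so it is not reproduced here.

```python
def threshold_mask(logits, thresholds):
    """Keep expert j for an example whenever its gate logit exceeds the learned
    per-expert threshold; trained to mimic the batchwise mask used in training."""
    return (logits > thresholds).float()
```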
Attention Function
The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function" INLINEFORM0 which takes a “source vector" INLINEFORM1 and a “target vector" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.
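As an illustration, the sketch below evaluates an additive attention score of this GNMT style, A(s_i, t_j) = v^T tanh(W1 s_i + W2 t_j), for all source/target pairs at once using two matrix multiplications and broadcasting. The tanh hidden activation, the shapes and the variable names are our assumptions, since the formula itself is elided above.

```python
import torch

def additive_attention_scores(src, tgt, W1, W2, v):
    """GNMT-style additive attention for every source step i and target step j.
    Shapes: src (S, d), tgt (T, d), W1 and W2 (h, d), v (h,); returns (S, T)."""
    hs = src @ W1.T                                   # project each source step once
    ht = tgt @ W2.T                                   # project each target step once
    return torch.tanh(hs[:, None, :] + ht[None, :, :]) @ v
```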
For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions. | DISPLAYFORM0, DISPLAYFORM0 DISPLAYFORM1 |
44104668796a6ca10e2ea3ecf706541da1cec2cf | 44104668796a6ca10e2ea3ecf706541da1cec2cf_0 | Q: What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?
Text: Introduction
Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.
Because of the lack of tests of many common spelling correction methods for Polish, it is useful to establish how they perform in a simple scenario. We constrain ourselves to the pure task of isolated correction of non-word errors, a class that is traditionally treated separately in the error correction literature BIBREF0 . Non-word errors are here incorrect word forms that not only differ from what was intended, but also do not constitute another, existing word themselves. Much of the initial research on error correction focused on this simple task, tackled without means of taking the context of the nearest words into account.
It is true that, especially in the case of neural networks, it is often possible and desirable to combine the problems of error detection, correction and context awareness into one task trained with a supervised training procedure. In language correction research for English, grammatical and regular spelling errors have also been treated uniformly, with much success BIBREF1 .
However, when more traditional methods are used, because of their predictability and interpretability for example, one can mix and match various approaches to dealing with the subproblems of detection, correction and context handling (often equivalent to employing some kind of a language model). We call it a modular approach to building spelling error correction systems. There is recent research where this paradigm was applied, interestingly, to convolutional networks trained separately for various subtasks BIBREF2 . In similar setups it is more useful to assess abilities of various solutions in isolation. The exact architecture of a spelling correction system should depend on characteristics of texts it will work on.
Similar considerations led us to exclude handcrafted solutions covering the whole spelling correction pipeline, primarily LanguageTool BIBREF3 , from our focus. Its performance in fixing the spelling of Polish tweets was already tested BIBREF4 . For our purposes it would be given an unfair advantage, since it is a rule-based system making heavy use of the words in the context of the error.
Problems of spelling correction for Polish
Published work on language correction for Polish dates back at least to the 1970s, when the simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to the specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3 .
These existing works pointed out more general, potentially useful qualities specific to spelling errors in Polish language texts. Primarily, it is the problem of leaving out diacritical signs, or, more rarely, adding them in wrong places. This phenomenon stems from using a variant of the US keyboard layout, where combinations of AltGr with some alphabetic keys produce characters unique to Polish. When the user forgets or neglects to press the AltGr key, typos such as writing *olowek instead of ołówek appear. In fact, BIBREF4 managed to get substantial performance on a Twitter corpus by using this "diacritical swapping" alone.
Baseline methods
The methods that we evaluated as baselines are the ones we consider basic, with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in the Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.
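A plain-Python stand-in for this baseline is sketched below; it is not the Lucene implementation used in the experiments, but it has the same semantics (unit-cost insertions, deletions and substitutions, no special handling of transpositions). The vocabulary is assumed to be a collection of known word forms such as those from SGJP.

```python
def levenshtein(a, b):
    """Dynamic-programming edit distance with unit costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct_by_edit_distance(error, vocabulary):
    """Pick the dictionary word closest to the error in edit distance."""
    return min(vocabulary, key=lambda word: levenshtein(error, word))
```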
Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.
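The procedure can be sketched as follows; the table of diacritical counterparts and the lower-casing are our simplifications, the length cut-off matches the 17-character limit mentioned above, the vocabulary is assumed to be a set, and the helper reuses the levenshtein function from the previous sketch.

```python
from itertools import product

# Polish letters grouped with their diacritic-free counterparts; plain "z"
# can correspond to either "ź" or "ż".
SWAPS = {"a": "aą", "ą": "aą", "c": "cć", "ć": "cć", "e": "eę", "ę": "eę",
         "l": "lł", "ł": "lł", "n": "nń", "ń": "nń", "o": "oó", "ó": "oó",
         "s": "sś", "ś": "sś", "z": "zźż", "ź": "zźż", "ż": "zźż"}

def diacritic_correct(error, vocabulary, max_len=16):
    """Generate all diacritical variants of the token, keep the in-vocabulary
    ones, and return the variant closest to the error in edit distance."""
    if len(error) > max_len:
        return None
    options = [SWAPS.get(ch, ch) for ch in error.lower()]
    candidates = {"".join(chars) for chars in product(*options)} & vocabulary
    return min(candidates, key=lambda w: levenshtein(error, w)) if candidates else None
```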
Vector distance
A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum with the cosine distance between word vectors. This is based on the observation that trained vector models of distributional semantics also contain representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.
The distance between two tokens INLINEFORM0 and INLINEFORM1 is thus defined as INLINEFORM2
Here INLINEFORM0 is just the Levenshtein distance between strings, and INLINEFORM1 – the cosine distance between vectors. INLINEFORM2 denotes the word vector for INLINEFORM3 . Both distance metrics are in our case roughly in the range [0,1] thanks to the scaling of edit distance performed automatically by Apache Lucene. We used a pretrained set of word embeddings of Polish BIBREF12 , obtained with the word2vec procedure in the skip-gram variant with negative sampling BIBREF13 .
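A sketch of this combined measure is given below, reusing the levenshtein helper from the earlier sketch. The paper leaves the exact weighting and scaling implicit (scaling is handled by Lucene there), so the alpha weight, the length normalisation of the edit distance, the mapping-style embedding lookup, and the assumption that the error token itself has an embedding are ours.

```python
import numpy as np

def combined_distance(error, word, vectors, alpha=0.5):
    """Weighted sum of a roughly [0, 1] edit distance and the cosine distance
    between the embeddings of the error and the candidate word."""
    lev = levenshtein(error, word) / max(len(error), len(word))
    v_e, v_w = vectors[error], vectors[word]
    cos = 1.0 - np.dot(v_e, v_w) / (np.linalg.norm(v_e) * np.linalg.norm(v_w))
    return alpha * lev + (1.0 - alpha) * cos

def correct_by_vectors(error, vocabulary, vectors):
    candidates = [w for w in vocabulary if w in vectors]
    return min(candidates, key=lambda w: combined_distance(error, w, vectors))
```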
Recurrent neural networks
Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed the characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since neural networks nowadays enjoy wide awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems to be the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.
The bidirectional version BIBREF17 of LSTM reads the character chains forward and backwards at the same time. Predictions from networks running in both directions are averaged.
In order to provide the network with an additional, broad-picture peek at the whole error form, we also evaluated a setup where the internal state of the LSTM cells, instead of being initialized randomly, is computed from an ELMo embedding BIBREF18 of the token. The ELMo embedder is capable of integrating linguistic information carried by the whole form (probably often not much in the case of errors), as well as by the string as a character chain. The latter is processed with a convolutional neural network. How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used BIBREF19 was trained on the Wikipedia and Common Crawl corpora of Polish.
The ELMo embedding network outputs three layers as matrices, which are supposed to reflect successive compositional layers of language, from phonetic phenomena at the bottom to lexical ones at the top. A weighted sum of these layers is computed, with the weights trained along with the LSTM error-correcting network. Then we apply a trained linear transformation, followed by INLINEFORM0 non-linearity: INLINEFORM1
(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional.
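A rough PyTorch sketch of this architecture is shown below. Several details are assumptions on our part: the per-position character classification (rather than autoregressive decoding), the softmax normalisation of the three layer weights, the hidden sizes, and the choice of tanh for the elided non-linearity.

```python
import torch
import torch.nn as nn

class ElmoInitCorrector(nn.Module):
    """Bidirectional character LSTM whose initial states come from a trained
    mix of the three ELMo layers for the whole (erroneous) token."""
    def __init__(self, n_chars, char_dim=64, hidden=256, elmo_dim=1024):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.layer_weights = nn.Parameter(torch.zeros(3))      # one weight per ELMo layer
        self.to_state = nn.Linear(elmo_dim, 4 * hidden)        # (h0, c0) for 2 directions
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_chars)

    def forward(self, char_ids, elmo_layers):
        # char_ids: (batch, length); elmo_layers: (3, batch, elmo_dim)
        w = torch.softmax(self.layer_weights, dim=0)
        mixed = (w[:, None, None] * elmo_layers).sum(dim=0)    # weighted sum of layers
        state = torch.tanh(self.to_state(mixed))               # trained linear + non-linearity
        h0, c0 = state.chunk(2, dim=-1)                        # each: (batch, 2 * hidden)
        h0 = h0.view(-1, 2, self.lstm.hidden_size).transpose(0, 1).contiguous()
        c0 = c0.view(-1, 2, self.lstm.hidden_size).transpose(0, 1).contiguous()
        states, _ = self.lstm(self.char_emb(char_ids), (h0, c0))
        return self.out(states)                                # per-position character logits
```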
Experimental setup
PlEWi BIBREF20 is an early version of the WikEd BIBREF21 error corpus, containing error type annotations that allow us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, of which 298,715 were unique. The corpus contains data extracted from the histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content or reflecting disagreements among Wikipedia editors.
The corpus features texts that are descriptive rather than conversational, contain relatively many proper names and are more likely to have been at least skimmed by the authors before submitting for online publication. Error cases provided by PlEWi are, therefore, not a balanced representation of spelling errors in written Polish language. PlEWi does have the advantage of scale in comparison to existing literature, such as BIBREF4 operating on a set of only 740 annotated errors in tweets.
All methods were tested on a test subset of 25% of cases, with 75% left for training (where needed) and 5% for development.
The methods that required training – namely the recurrent neural networks – had their loss measured as the cross-entropy between correct character labels and predictions. This value was minimized with the Adam algorithm BIBREF22 . The networks were trained for 35 epochs.
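For concreteness, a minimal version of this training setup is sketched below; the learning rate and the shape of the data loader (yielding character ids, ELMo layers and target character ids, matching the model interface from the previous sketch) are assumptions.

```python
import torch.nn.functional as F
from torch.optim import Adam

def train(model, loader, epochs=35, lr=1e-3):
    optimizer = Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for char_ids, elmo_layers, target_ids in loader:
            logits = model(char_ids, elmo_layers)              # (batch, length, n_chars)
            loss = F.cross_entropy(logits.transpose(1, 2), target_ids)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```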
Results
The experimental results are presented in Table TABREF4 . Diacritic swapping showed remarkably poor performance, despite promising mentions in the existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This may well limit the number of the most trivial mistakes.
On the other hand, the vector distance method was able to bring a discernible improvement over pure Levenshtein distance, comparable even with the most basic LSTM. It is possible that assigning more fine-tuned weights to the edit distance and the semantic distance would make the quality of predictions even higher. The idea of using vector space measurements explicitly can also be expanded if we were to consider the problem of contextualizing corrections. For example, the semantic distance of proposed corrections to the nearest words is likely to carry much information about their appropriateness. Looking from another angle, searching for words that seem semantically off in context may be a good heuristic for detecting errors that are not non-word errors (that is, errors that lead to wrong forms appearing in the text which are nevertheless in-vocabulary).
The good performance of the recurrent network methods is hardly a surprise, given the observed effectiveness of neural networks in many NLP tasks in the recent decade. It seems that the bidirectional LSTM augmented with ELMo may already hit the limit for correcting Polish spelling errors without contextual information. While it improves accuracy in comparison to the LSTM initialized with random noise, it makes the test cross-entropy slightly worse, which hints at overfitting. The perplexity measures actually increase sharply for more sophisticated architectures. Perplexity should show how little probability is assigned by the model to true answers. We measure it as INLINEFORM0
where INLINEFORM0 is a sequence of INLINEFORM1 characters, forming the correct version of the word, and INLINEFORM2 is the estimated probability of the INLINEFORM3 th character, given previous predicted characters and the incorrect form. The observed increase of perplexity for increasingly accurate models is most likely due to more refined predicted probability distributions, which go beyond just assigning the bulk of probability to the best answer.
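The formula itself is elided above; written out in the standard way consistent with this description (the choice of the natural base is ours), it is:

```latex
\mathrm{PPL}(w) = \Big(\prod_{i=1}^{n} p_i\Big)^{-1/n}
                = \exp\Big(-\frac{1}{n}\sum_{i=1}^{n} \ln p_i\Big)
```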
Interesting insights can be gained from the weights assigned by optimization to the layers of the ELMo network, which are taken as the word form embedding (Table TABREF5 ). The first layer, the one nearest to the input of the network, is given relatively the least importance, while the middle one dominates both of the others taken together. This suggests that in error correction, at least for Polish, the middle level of morphemes and other characteristic character chunks is more important than phenomena that are low-level or tied to some specific words. This observation should be taken into account in further research on practical solutions for spelling correction.
Conclusion
Among the methods tested, the bidirectional LSTM, especially when initialized with ELMo embeddings, offers the best accuracy and raw performance. Adding ELMo to a straightforward PyTorch implementation of LSTM may be easier now than at the time of performing our tests, as since then the authors of the ELMoForManyLangs package BIBREF19 have improved their programmatic interface. However, if a more interpretable and explainable output is required, some version of vector distance combined with edit distance may be the best direction. It should be noted that this method produces multiple candidate corrections with their similarity scores, as opposed to only one "best guess" correction that can be obtained from a character-based LSTM. This is important in applications where it is up to humans to make the final decision, and they are only to be aided by a machine.
It is desirable for further research to expand the corpus material into a wider and more representative set of texts. Nevertheless, the solution for any practical case has to be tailored to its characteristic error patterns. Works on language correction for English show that available corpora can be "boosted" BIBREF1 , i.e. expanded by generating new errors consistent with a generative model inferred from the data. This may greatly aid in developing models that are dependent on learning from error corpora.
Deliberately omitted from this paper are the elements accompanying most real-word error correction solutions. Some fairly obvious approaches to integrating evidence from context include n-grams and Markov chains, although the possibility of using measurements in spaces of semantic vectors was already mentioned in this article. Similarly, non-word errors can be easily detected by comparing tokens against a reference vocabulary, but in practice one should have ways of detecting mistakes masquerading as real words and of fixing bad segmentation (tokens that are glued together or improperly separated). Testing how well various methods deal with these problems in Polish is left for future research. | Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818. |
bbcd77aac74989f820e84488c52f3767d0405d51 | bbcd77aac74989f820e84488c52f3767d0405d51_0 | Q: What solutions are proposed for error detection and context awareness?
| Unanswerable |
6a31bd676054222faf46229fc1d283322478a020 | 6a31bd676054222faf46229fc1d283322478a020_0 | Q: How is PIEWi annotated?
| [error, correction] pairs |
e4d16050f0b457c93e590261732a20401def9cde | e4d16050f0b457c93e590261732a20401def9cde_0 | Q: What methods are tested in PIEWi?
Text: Introduction
Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.
Because of the lack of tests of many common spelling correction methods for Polish, it is useful to establish how they perform in a simple scenario. We constrain ourselves to the pure task of isolated correction of non-word errors, a class that is traditionally treated separately in the error correction literature BIBREF0 . Non-word errors are here incorrect word forms that not only differ from what was intended, but also do not constitute another, existing word themselves. Much of the initial research on error correction focused on this simple task, tackled without means of taking the context of the nearest words into account.
It is true that, especially in the case of neural networks, it is often possible and desirable to combine the problems of error detection, correction and context awareness into one task trained with a supervised training procedure. In language correction research for English, grammatical and regular spelling errors have also been treated uniformly, with much success BIBREF1 .
However, when more traditional methods are used, because of their predictability and interpretability for example, one can mix and match various approaches to dealing with the subproblems of detection, correction and context handling (often equivalent to employing some kind of a language model). We call it a modular approach to building spelling error correction systems. There is recent research where this paradigm was applied, interestingly, to convolutional networks trained separately for various subtasks BIBREF2 . In similar setups it is more useful to assess abilities of various solutions in isolation. The exact architecture of a spelling correction system should depend on characteristics of texts it will work on.
Similar considerations led us to exclude handcrafted solutions covering the whole spelling correction pipeline, primarily LanguageTool BIBREF3. Its performance in fixing the spelling of Polish tweets has already been tested BIBREF4. For our purposes it would be given an unfair advantage, since it is a rule-based system making heavy use of the words in the context of the error.
Problems of spelling correction for Polish
Published work on language correction for Polish dates back at least to the 1970s, when the simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5, BIBREF6. Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7, BIBREF4. These works emphasized the importance of tailoring correction systems to the specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3.
These existing works pointed out more general, potentially useful qualities specific to spelling errors in Polish language texts. Primarily, this is the problem of leaving out diacritical signs or, more rarely, adding them in the wrong places. This phenomenon stems from using a variant of the US keyboard layout, where combinations of AltGr with some alphabetic keys produce characters unique to Polish. When the user forgets or neglects to press the AltGr key, typos such as writing *olowek instead of ołówek appear. In fact, BIBREF4 managed to get substantial performance on a Twitter corpus by using this “diacritical swapping” alone.
Baseline methods
The methods that we evaluated as baselines are the ones we consider basic, with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in the Apache Lucene library BIBREF9. It is a version of edit distance that treats deletions, insertions and replacements as adding one unit of distance, without giving special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 – was used as the reference vocabulary.
Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.
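For illustration, this procedure can be sketched as follows (the character mapping and the toy vocabulary below are simplified placeholders, not the exact resources used in our experiments):

    from itertools import product

    # Simplified mapping from characters to their possible diacritic variants (including themselves).
    VARIANTS = {
        'a': 'aą', 'c': 'cć', 'e': 'eę', 'l': 'lł', 'n': 'nń', 'o': 'oó',
        's': 'sś', 'z': 'zźż', 'ą': 'ąa', 'ć': 'ćc', 'ę': 'ęe', 'ł': 'łl',
        'ń': 'ńn', 'ó': 'óo', 'ś': 'śs', 'ź': 'źz', 'ż': 'żz',
    }

    def diacritic_candidates(token, vocabulary, max_len=16):
        """Return in-vocabulary strings obtainable by adding or removing diacritics in `token`."""
        if len(token) > max_len:   # skip very long tokens: the variant space explodes
            return []
        options = [VARIANTS.get(ch, ch) for ch in token]
        return sorted({''.join(combo) for combo in product(*options)} & set(vocabulary))

    # Toy example; the correction is then the candidate with the smallest edit distance.
    print(diacritic_candidates('olowek', {'ołówek', 'osówek'}))   # ['ołówek']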
Vector distance
A promising method, adapted from work on correcting texts by English language learners BIBREF11, expands on the concept of selecting the correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum with the cosine distance between word vectors. This is based on the observation that trained vector models of distributional semantics also contain representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and will therefore be assigned a similar vector embedding.
The distance between two tokens $a$ and $b$ is thus defined as $d(a, b) = w_L \, d_L(a, b) + w_C \, d_C(\mathbf {v}_a, \mathbf {v}_b)$.
Here $d_L$ is the Levenshtein distance between strings, and $d_C$ – the cosine distance between vectors; $\mathbf {v}_x$ denotes the word vector for $x$. Both distance metrics are in our case roughly in the range [0,1] thanks to the scaling of edit distance performed automatically by Apache Lucene. We used a pretrained set of word embeddings of Polish BIBREF12, obtained with the word2vec procedure using skipgrams and negative sampling BIBREF13.
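A minimal sketch of this combined measure (a plain dynamic-programming edit distance, length-based scaling and equal weights stand in here for the Lucene-based scaling and the actual weighting):

    import numpy as np

    def levenshtein(a, b):
        """Plain dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    def combined_distance(error, candidate, vectors, w_edit=0.5, w_cos=0.5):
        """Weighted sum of (normalised) edit distance and cosine distance between embeddings."""
        edit = levenshtein(error, candidate) / max(len(error), len(candidate))  # scale to [0, 1]
        v1, v2 = vectors[error], vectors[candidate]
        cos = 1.0 - float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        return w_edit * edit + w_cos * cos

    # The correction is the in-vocabulary word minimising this distance, e.g.:
    # best = min(vocabulary, key=lambda w: combined_distance('enginir', w, vectors))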
Recurrent neural networks
Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since nowadays neural networks enjoy a large awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems to be the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.
The bidirectional version BIBREF17 of LSTM reads the character chains forward and backwards at the same time. Predictions from networks running in both directions are averaged.
In order to provide the network with an additional, broad-picture view of the whole error form, we also evaluated a setup where the internal state of the LSTM cells, instead of being initialized randomly, is computed from an ELMo embedding BIBREF18 of the token. The ELMo embedder is capable of integrating linguistic information carried by the whole form (probably often not much in the case of errors), as well as the string as a character chain. The latter is processed with a convolutional neural network. How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used BIBREF19 was trained on Wikipedia and Common Crawl corpora of Polish.
The ELMo embedding network outputs three layers as matrices, which are supposed to reflect subsequent compositional layers of language, from phonetic phenomena at the bottom to lexical ones at the top. A weighted sum of these layers is computed, with weights trained along with the LSTM error-correcting network. Then we apply a trained linear transformation, followed by a non-linearity $\sigma $: $h_0 = \sigma (W \cdot \mathrm {ELMo}(x) + b)$
(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional.
Experimental setup
PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors.
The corpus features texts that are descriptive rather than conversational, contain relatively many proper names and are more likely to have been at least skimmed by the authors before submitting for online publication. Error cases provided by PlEWi are, therefore, not a balanced representation of spelling errors in written Polish language. PlEWi does have the advantage of scale in comparison to existing literature, such as BIBREF4 operating on a set of only 740 annotated errors in tweets.
All methods were tested on a test subset of 25% of cases, with 75% left for training (where needed) and 5% for development.
The methods that required training – namely the recurrent neural networks – had their loss measured as cross-entropy between the correct character labels and the predictions. This value was minimized with the Adam algorithm BIBREF22. The networks were trained for 35 epochs.
Results
The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.
On the other hand, the vector distance method was able to bring a discernible improvement over pure Levenshtein distance, comparable even with the most basic LSTM. It is possible that assigning more fine-tuned weights to edit distance and semantic distance would make the quality of predictions even higher. The idea of using vector space measurements explicitly can be also expanded if we were to consider the problem of contextualizing corrections. For example, the semantic distance of proposed corrections to the nearest words is likely to carry much information about their appropriateness. Looking from another angle, searching for words that seem semantically off in context may be a good heuristic for detecting errors that are not nonword (that is, they lead to wrong forms appearing in text which are nevertheless in-vocabulary).
The good performance of the recurrent network methods is hardly a surprise, given the observed effectiveness of neural networks in many NLP tasks in the recent decade. It seems that a bidirectional LSTM augmented with ELMo may already hit the limit for correcting Polish spelling errors without contextual information. While it improves accuracy in comparison to an LSTM initialized with random noise, it makes the test cross-entropy slightly worse, which hints at overfitting. The perplexity measures actually increase sharply for more sophisticated architectures. Perplexity should show how little probability is assigned by the model to true answers. We measure it as $\mathrm {PP}(w) = \left( \prod _{i=1}^{n} p_i \right)^{-1/n},$
where $w$ is a sequence of $n$ characters forming the correct version of the word, and $p_i$ is the estimated probability of the $i$-th character, given previous predicted characters and the incorrect form. The observed increase of perplexity for increasingly accurate models is most likely due to more refined predicted probability distributions, which go beyond just assigning the bulk of probability to the best answer.
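A minimal sketch of this per-word measure, assuming the per-character probabilities assigned by the model to the true characters are already available:

    import math

    def word_perplexity(char_probs):
        """Perplexity of one corrected word, given the model's probability of each true character."""
        n = len(char_probs)
        return math.exp(-sum(math.log(p) for p in char_probs) / n)

    # A confident model yields a value close to 1, e.g. word_perplexity([0.9, 0.8, 0.95]) ≈ 1.13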
Interesting insights can be gained from weights assigned by optimization to layers of ELMo network, which are taken as the word form embedding (Table TABREF5 ). The first layer, and the one that is nearest to input of the network, is given relatively the least importance, while the middle one dominates both others taken together. This suggests that in error correction, at least for Polish, the middle level of morphemes and other characteristic character chunks is more important than phenomena that are low-level or tied to some specific words. This observation should be taken into account in further research on practical solutions for spelling correction.
Conclusion
Among the methods tested, the bidirectional LSTM, especially when initialized with ELMo embeddings, offers the best accuracy and raw performance. Adding ELMo to a straightforward PyTorch implementation of LSTM may be easier now than at the time of performing our tests, as the authors of the ELMoForManyLangs package BIBREF19 have since improved its programmatic interface. However, if a more interpretable and explainable output is required, some version of vector distance combined with edit distance may be the best direction. It should be noted that this method produces multiple candidate corrections with their similarity scores, as opposed to only one “best guess” correction that can be obtained from a character-based LSTM. This is important in applications where it is up to humans to make the final decision, and they are only to be aided by a machine.
It is desirable for further research to expand the corpus material into a wider and more representative set of texts. Nevertheless, the solution for any practical case has to be tailored to its characteristic error patterns. Works on language correction for English show that available corpora can be “boosted” BIBREF1, i.e. expanded by generating new errors consistent with a generative model inferred from the data. This may greatly aid in developing models that depend on learning from error corpora.
Deliberately omitted from this paper are the elements accompanying most real-word error correction solutions. Some fairly obvious approaches to integrating evidence from context include n-grams and Markov chains, although the possibility of using measurements in spaces of semantic vectors was already mentioned in this article. Similarly, non-word errors can be easily detected by comparing tokens against a reference vocabulary, but in practice one should also have ways of detecting mistakes masquerading as real words and of fixing bad segmentation (tokens that are glued together or improperly separated). Testing how well various methods deal with these problems for Polish is left for future research. | Levenshtein distance metric BIBREF8, diacritical swapping, Levenshtein distance is used in a weighted sum to cosine distance between word vectors, ELMo-augmented LSTM |
b25e7137f49f77e7e67ee2f40ca585d3a377f8b5 | b25e7137f49f77e7e67ee2f40ca585d3a377f8b5_0 | Q: Which specific error correction solutions have been proposed for specialized corpora in the past?
Text: Introduction
Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.
Because of the lack of tests of many common spelling correction methods for Polish, it is useful to establish how they perform in a simple scenario. We constrain ourselves to the pure task of isolated correction of non-word errors. They are traditionally separated in error correction literature BIBREF0 . Non-word errors are here incorrect word forms that not only differ from what was intended, but also do not constitute another, existing word themselves. Much of the initial research on error correction focused on this simple task, tackled without means of taking the context of the nearest words into account.
It is true that, especially in the case of neural networks, it is often possible and desirable to combine the problems of error detection, correction and context awareness into one task trained with a supervised training procedure. In language correction research for English, grammatical and regular spelling errors have also been treated uniformly, with much success BIBREF1.
However, when more traditional methods are used, because of their predictability and interpretability for example, one can mix and match various approaches to dealing with the subproblems of detection, correction and context handling (often equivalent to employing some kind of a language model). We call it a modular approach to building spelling error correction systems. There is recent research where this paradigm was applied, interestingly, to convolutional networks trained separately for various subtasks BIBREF2 . In similar setups it is more useful to assess abilities of various solutions in isolation. The exact architecture of a spelling correction system should depend on characteristics of texts it will work on.
Similar considerations led us to exclude handcrafted solutions covering the whole spelling correction pipeline, primarily LanguageTool BIBREF3. Its performance in fixing the spelling of Polish tweets has already been tested BIBREF4. For our purposes it would be given an unfair advantage, since it is a rule-based system making heavy use of the words in the context of the error.
Problems of spelling correction for Polish
Published work on language correction for Polish dates back at least to the 1970s, when the simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5, BIBREF6. Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7, BIBREF4. These works emphasized the importance of tailoring correction systems to the specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3.
These existing works pointed out more general, potentially useful qualities specific to spelling errors in Polish language texts. Primarily, this is the problem of leaving out diacritical signs or, more rarely, adding them in the wrong places. This phenomenon stems from using a variant of the US keyboard layout, where combinations of AltGr with some alphabetic keys produce characters unique to Polish. When the user forgets or neglects to press the AltGr key, typos such as writing *olowek instead of ołówek appear. In fact, BIBREF4 managed to get substantial performance on a Twitter corpus by using this “diacritical swapping” alone.
Baseline methods
The methods that we evaluated as baselines are the ones we consider basic, with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in the Apache Lucene library BIBREF9. It is a version of edit distance that treats deletions, insertions and replacements as adding one unit of distance, without giving special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 – was used as the reference vocabulary.
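A minimal sketch of this dictionary lookup (the python-Levenshtein package and a brute-force scan over the word list stand in here for Lucene's fuzzy matching; sgjp_words is a placeholder for the SGJP word list):

    import Levenshtein  # python-Levenshtein package, used here instead of Lucene's implementation

    def nearest_dictionary_words(token, vocabulary):
        """Return the known word(s) within the smallest edit distance from `token`."""
        best_dist, best = None, []
        for word in vocabulary:
            d = Levenshtein.distance(token, word)
            if best_dist is None or d < best_dist:
                best_dist, best = d, [word]
            elif d == best_dist:
                best.append(word)
        return best_dist, best

    # e.g. nearest_dictionary_words('ksiąszka', sgjp_words)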
Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.
Vector distance
A promising method, adapted from work on correcting texts by English language learners BIBREF11, expands on the concept of selecting the correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum with the cosine distance between word vectors. This is based on the observation that trained vector models of distributional semantics also contain representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and will therefore be assigned a similar vector embedding.
The distance between two tokens $a$ and $b$ is thus defined as $d(a, b) = w_L \, d_L(a, b) + w_C \, d_C(\mathbf {v}_a, \mathbf {v}_b)$.
Here $d_L$ is the Levenshtein distance between strings, and $d_C$ – the cosine distance between vectors; $\mathbf {v}_x$ denotes the word vector for $x$. Both distance metrics are in our case roughly in the range [0,1] thanks to the scaling of edit distance performed automatically by Apache Lucene. We used a pretrained set of word embeddings of Polish BIBREF12, obtained with the word2vec procedure using skipgrams and negative sampling BIBREF13.
Recurrent neural networks
Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since nowadays neural networks enjoy a large awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems to be the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.
The bidirectional version BIBREF17 of LSTM reads the character chains forward and backwards at the same time. Predictions from networks running in both directions are averaged.
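A minimal PyTorch sketch of such a character-level corrector; the dimensions are illustrative, input and output character sequences are assumed to be padded to a common length, and the two directions are concatenated before the output projection rather than having their predictions averaged:

    import torch
    import torch.nn as nn

    class CharCorrector(nn.Module):
        """Reads the characters of an error and predicts the correct character at each position."""
        def __init__(self, n_chars, emb_dim=64, hidden_dim=256, bidirectional=True):
            super().__init__()
            self.embed = nn.Embedding(n_chars, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                bidirectional=bidirectional)
            out_dim = hidden_dim * (2 if bidirectional else 1)
            self.proj = nn.Linear(out_dim, n_chars)

        def forward(self, char_ids):                     # char_ids: (batch, seq_len)
            hidden, _ = self.lstm(self.embed(char_ids))  # (batch, seq_len, out_dim)
            return self.proj(hidden)                     # logits over characters per position

    # Training minimises cross-entropy between these logits and the characters of the correct
    # form, optimised with Adam, as described in the experimental setup below.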
In order to provide the network with an additional, broad-picture view of the whole error form, we also evaluated a setup where the internal state of the LSTM cells, instead of being initialized randomly, is computed from an ELMo embedding BIBREF18 of the token. The ELMo embedder is capable of integrating linguistic information carried by the whole form (probably often not much in the case of errors), as well as the string as a character chain. The latter is processed with a convolutional neural network. How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used BIBREF19 was trained on Wikipedia and Common Crawl corpora of Polish.
The ELMo embedding network outputs three layers as matrices, which are supposed to reflect subsequent compositional layers of language, from phonetic phenomena at the bottom to lexical ones at the top. A weighted sum of these layers is computed, with weights trained along with the LSTM error-correcting network. Then we apply a trained linear transformation, followed by a non-linearity $\sigma $: $h_0 = \sigma (W \cdot \mathrm {ELMo}(x) + b)$
(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional.
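A sketch of how the ELMo layers can be mixed into an initial state (softmax-normalised layer weights and a tanh non-linearity are assumptions consistent with, but not necessarily identical to, the configuration described above):

    import torch
    import torch.nn as nn

    class ElmoToInitState(nn.Module):
        """Mixes the three ELMo layers and maps the result to an initial LSTM hidden state."""
        def __init__(self, elmo_dim, hidden_dim):
            super().__init__()
            self.layer_weights = nn.Parameter(torch.zeros(3))   # trained weights over the layers
            self.linear = nn.Linear(elmo_dim, hidden_dim)

        def forward(self, elmo_layers):                  # elmo_layers: (3, elmo_dim) for one token
            weights = torch.softmax(self.layer_weights, dim=0)
            mixed = (weights.unsqueeze(1) * elmo_layers).sum(dim=0)
            return torch.tanh(self.linear(mixed))        # tanh assumed as the cellwise non-linearity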
Experimental setup
PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors.
The corpus features texts that are descriptive rather than conversational, contain relatively many proper names and are more likely to have been at least skimmed by the authors before submitting for online publication. Error cases provided by PlEWi are, therefore, not a balanced representation of spelling errors in written Polish language. PlEWi does have the advantage of scale in comparison to existing literature, such as BIBREF4 operating on a set of only 740 annotated errors in tweets.
All methods were tested on a test subset of 25% of cases, with 75% left for training (where needed) and 5% for development.
The methods that required training – namely the recurrent neural networks – had their loss measured as cross-entropy between the correct character labels and the predictions. This value was minimized with the Adam algorithm BIBREF22. The networks were trained for 35 epochs.
Results
The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.
On the other hand, the vector distance method was able to bring a discernible improvement over pure Levenshtein distance, comparable even with the most basic LSTM. It is possible that assigning more fine-tuned weights to edit distance and semantic distance would make the quality of predictions even higher. The idea of using vector space measurements explicitly can be also expanded if we were to consider the problem of contextualizing corrections. For example, the semantic distance of proposed corrections to the nearest words is likely to carry much information about their appropriateness. Looking from another angle, searching for words that seem semantically off in context may be a good heuristic for detecting errors that are not nonword (that is, they lead to wrong forms appearing in text which are nevertheless in-vocabulary).
The good performance of the recurrent network methods is hardly a surprise, given the observed effectiveness of neural networks in many NLP tasks in the recent decade. It seems that a bidirectional LSTM augmented with ELMo may already hit the limit for correcting Polish spelling errors without contextual information. While it improves accuracy in comparison to an LSTM initialized with random noise, it makes the test cross-entropy slightly worse, which hints at overfitting. The perplexity measures actually increase sharply for more sophisticated architectures. Perplexity should show how little probability is assigned by the model to true answers. We measure it as $\mathrm {PP}(w) = \left( \prod _{i=1}^{n} p_i \right)^{-1/n},$
where $w$ is a sequence of $n$ characters forming the correct version of the word, and $p_i$ is the estimated probability of the $i$-th character, given previous predicted characters and the incorrect form. The observed increase of perplexity for increasingly accurate models is most likely due to more refined predicted probability distributions, which go beyond just assigning the bulk of probability to the best answer.
Interesting insights can be gained from weights assigned by optimization to layers of ELMo network, which are taken as the word form embedding (Table TABREF5 ). The first layer, and the one that is nearest to input of the network, is given relatively the least importance, while the middle one dominates both others taken together. This suggests that in error correction, at least for Polish, the middle level of morphemes and other characteristic character chunks is more important than phenomena that are low-level or tied to some specific words. This observation should be taken into account in further research on practical solutions for spelling correction.
Conclusion
Among the methods tested, the bidirectional LSTM, especially when initialized with ELMo embeddings, offers the best accuracy and raw performance. Adding ELMo to a straightforward PyTorch implementation of LSTM may be easier now than at the time of performing our tests, as the authors of the ELMoForManyLangs package BIBREF19 have since improved its programmatic interface. However, if a more interpretable and explainable output is required, some version of vector distance combined with edit distance may be the best direction. It should be noted that this method produces multiple candidate corrections with their similarity scores, as opposed to only one “best guess” correction that can be obtained from a character-based LSTM. This is important in applications where it is up to humans to make the final decision, and they are only to be aided by a machine.
It is desirable for further research to expand the corpus material into a wider and more representative set of texts. Nevertheless, the solution for any practical case has to be tailored to its characteristic error patterns. Works on language correction for English show that available corpora can be “boosted” BIBREF1, i.e. expanded by generating new errors consistent with a generative model inferred from the data. This may greatly aid in developing models that depend on learning from error corpora.
Deliberately omitted from this paper are the elements accompanying most real-word error correction solutions. Some fairly obvious approaches to integrating evidence from context include n-grams and Markov chains, although the possibility of using measurements in spaces of semantic vectors was already mentioned in this article. Similarly, non-word errors can be easily detected by comparing tokens against a reference vocabulary, but in practice one should also have ways of detecting mistakes masquerading as real words and of fixing bad segmentation (tokens that are glued together or improperly separated). Testing how well various methods deal with these problems for Polish is left for future research. | spellchecking mammography reports and tweets BIBREF7 , BIBREF4 |
d803b782023553bbf9b36551fbc888ad189b1f29 | d803b782023553bbf9b36551fbc888ad189b1f29_0 | Q: What was the criteria for human evaluation?
Text: Introduction
Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface that interacts with users, NLG has a significant impact on the users' experience.
Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and natural, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training on a labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses them as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains.
We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios.
To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains.
In summary, our key contributions are three-fold:
A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available.
We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems.
On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domain much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research.
Background
A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts.
Specifically, dialog act $\mathcal {A}$ is defined as the combination of intent $\mathcal {I}$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$: $\mathcal {A} = \left[ \mathcal {I}, (s_1, v_1), \cdots , (s_P, v_P) \right],$
where $P$ is the number of pairs, which varies in different dialog acts.
Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, and select.
Slot-value pairs indicate the category and content of the information to express in the utterance, respectively.
The goal of NLG is to translate $\mathcal {A}$ into a natural language response $x = [x_1, \cdots , x_T]$, where $T$ is the sequence length. In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”.
Semantically Conditioned GPT
We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $\mathcal {D}=\lbrace (\mathcal {A}_n, x_n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $\theta $ to characterize $p_{\theta }(x | \mathcal {A})$. To leverage the sequential structure of response, one may further decompose the joint probability of $x$ using the chain rule, casting an auto-regressive generation process as follows: $p_{\theta }(x | \mathcal {A}) = \prod _{t=1}^{T} p_{\theta }(x_t | x_{<t}, \mathcal {A}),$
where $x_{<t}$ indicates all tokens before $t$.
Learning $\theta $ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset: $\mathcal {L}_{\theta }(\mathcal {D}) = \sum _{n=1}^{N} \sum _{t=1}^{T_n} \log p_{\theta }(x_{t,n} | x_{<t,n}, \mathcal {A}_n).$
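In practice this objective reduces to summing the log-probabilities of the reference response tokens; a minimal PyTorch sketch, assuming next-token logits for the response positions are given:

    import torch
    import torch.nn.functional as F

    def response_log_likelihood(logits, target_ids):
        """Sum of log p(x_t | x_<t, A) over response tokens, given next-token logits."""
        # logits: (T, vocab_size) predictions for positions 1..T; target_ids: (T,)
        log_probs = F.log_softmax(logits, dim=-1)
        return log_probs.gather(1, target_ids.unsqueeze(1)).sum()

    # Maximising this quantity over the training set is the MLE objective above;
    # in practice one minimises the equivalent token-level cross-entropy loss.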
In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe.
Semantically Conditioned GPT ::: Massive Plain Language Pre-training.
Large models trained on massive training corpus usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on extremely massive text data OpenWebText BIBREF6. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences.
Semantically Conditioned GPT ::: Dialog-Act Controlled Pre-training.
To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples.
We firstly pre-process dialog act $\mathcal {A}$ into a sequence of control codes using the following format: $\mathcal {A}^{\prime } = \left[\, \mathcal {I}~(~s_1 = v_1, \cdots , s_P = v_P~)\, \right].$
Meanwhile, the output sequence $x^{\prime }$ is pre-processed via augmenting $x$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $\mathcal {A}^{\prime }$ is concatenated with its augmented response $x^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $x^{\prime }$, and $\mathcal {A}^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM and term our model Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12.
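A sketch of this pre-processing step (the exact delimiter format and the handling of the [BOS]/[EOS] markers by the tokenizer are simplifications of the actual pipeline):

    def build_training_sequence(intent, slot_values, response, tokenizer):
        """Linearise a dialog act, append the response, and mask the prefix out of the loss."""
        dialog_act = intent + ' ( ' + ' , '.join(f'{s} = {v}' for s, v in slot_values) + ' ) '
        prefix_ids = tokenizer.encode(dialog_act)
        target_ids = tokenizer.encode('[BOS] ' + response + ' [EOS]')
        input_ids = prefix_ids + target_ids
        # Positions covering the dialog-act prefix do not contribute to the prediction loss.
        loss_mask = [0] * len(prefix_ids) + [1] * len(target_ids)
        return input_ids, loss_mask

    # e.g. build_training_sequence('confirm', [('name', 'Hilton'), ('area', 'center')],
    #     'Let me confirm that you are searching for Hilton in the center area', tokenizer)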
Semantically Conditioned GPT ::: Fine-tuning.
For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels.
It is worth noticing that the above recipe has several favorable properties:
Flexibility. SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts.
Controllability. In contrast to GPT-2 that generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information and maintain its fluency.
Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets.
Dataset: FewShotWOZ ::: Revisiting NLG Benchmarks.
The three NLG datasets commonly used in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9, BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice; labeling 50 utterances takes 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing are 100% covered by the training set for the E2E NLG dataset. This renders difficulties in evaluating the model's generalization ability for new domains.
Dataset: FewShotWOZ ::: FewShotWOZ.
To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is larger than any existing NLG datasets. $({2})$ Less training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in $\mathtt {Attraction}$/ $\mathtt {Taxi}$/ $\mathtt {Train}$ domain is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ is shown in Table TABREF26.
Dataset: FewShotWOZ ::: Collection Protocols.
We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples.
Related Work ::: Pre-trained Models.
Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest lines of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated massive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, while Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates, and so on. Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems.
Related Work ::: Dialog.
Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models. This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset, FewShotWOZ, and a new model, SC-GPT.
Experiments
In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting?
Experiments ::: Experimental Setup ::: Implementation details.
The model was built upon the Huggingface PyTorch Transformers library BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. A linear learning rate scheduler with an initial rate of 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch of 8 on a machine with 8 Nvidia V100 GPUs until observing no significant progress on validation loss or up to 20 epochs, whichever came earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately for five epochs.
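A minimal sketch of such a fine-tuning setup with the Huggingface library (warm-up and total step counts are placeholders, and the weight-decay value is an assumption):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer, get_linear_schedule_with_warmup

    tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
    model = GPT2LMHeadModel.from_pretrained('gpt2-medium')

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
    scheduler = get_linear_schedule_with_warmup(optimizer,
                                                num_warmup_steps=0,
                                                num_training_steps=10000)  # placeholder step count

    # Inside the training loop, GPT2LMHeadModel computes the token-level loss directly
    # when `labels` are supplied (positions set to -100 are ignored):
    # loss = model(input_ids=batch, labels=labels).loss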
Experiments ::: Experimental Setup ::: Automatic metrics.
Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$ and $q$ are the numbers of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output.
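A simplified sketch of the ERR computation; matching slot values as surface substrings is a simplification of the exact-match counting over delexicalised slot tokens:

    def slot_error_rate(candidate, slot_values):
        """ERR = (missing + redundant slot values) / total slots, for a single utterance."""
        total = len(slot_values)
        missing = sum(1 for v in slot_values if str(v) not in candidate)
        redundant = sum(max(candidate.count(str(v)) - 1, 0) for v in slot_values)
        return (missing + redundant) / total if total else 0.0

    # e.g. slot_error_rate('Hilton is in the center area', ['Hilton', 'center'])  # 0.0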
Experiments ::: Experimental Setup ::: Human evaluation.
We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which the generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance reads as naturally as one written by a human. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected a total of 5800 judgements.
Experiments ::: Experimental Setup ::: Baselines.
We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM.
Experiments ::: FewShotWOZ
Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately.
Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend as the automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics: SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement.
Experiments ::: MultiWOZ
The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and references). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly to SC-GPT on the full MultiWOZ dataset; this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, the conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts.
To study how SC-GPT performs with different training data sizes. We further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when the fewer numbers of in-domain labels are used for fine-tuning.
Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, SC-GPT serves as a simple and strong baseline in this setting, and the two combined provide a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses.
Experiments ::: Analysis
We perform detailed analysis to investigate SC-GPT's flexibility, controllability and generalizability. The test set is split into two subsets – seen and unseen. If a dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. This is further confirmed by the quantitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods are prone to over-generating or missing important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references.
We further simulate the process of deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit. We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in the training data, shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate the setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value.
Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses.
Conclusion and Future Work
In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantically controlling and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations.
There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model. | to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness |
fc5f9604c74c9bb804064f315676520937131e17 | fc5f9604c74c9bb804064f315676520937131e17_0 | Q: What automatic metrics are used to measure performance of the system?
Text: Introduction
Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface that interacts with users, NLG has a significant impact on the users' experience.
Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and natural, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training on a labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses them as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains.
We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios.
To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains.
In summary, our key contributions are three-fold:
A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available.
We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems.
On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domains much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research.
Background
A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts.
Specifically, a dialog act $\mathcal {A}$ is defined as the combination of an intent $\mathcal {I}$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$, where $P$ is the number of pairs, which varies in different dialog acts.
Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, and select.
Slot-value pairs indicate the category and content of the information to express in the utterance, respectively.
The goal of NLG is to translate $\mathcal {A}$ into a natural language response $\mathbf {x} = [x_1, \cdots , x_T]$, where $T$ is the sequence length. In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”.
Semantically Conditioned GPT
We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $\mathcal {D}=\lbrace (\mathcal {A}_n, \mathbf {x}_n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $\theta $ to characterize $p_{\theta }(\mathbf {x} \mid \mathcal {A})$. To leverage the sequential structure of response, one may further decompose the joint probability of $\mathbf {x}$ using the chain rule, casting an auto-regressive generation process as follows: $p_{\theta }(\mathbf {x} \mid \mathcal {A}) = \prod _{t=1}^{T} p_{\theta }(x_t \mid x_{<t}, \mathcal {A})$, where $x_{<t}$ indicates all tokens before $t$.
Learning $\theta $ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset: $\mathcal {L}_{\theta }(\mathcal {D}) = \sum _{n=1}^{N} \log p_{\theta }(\mathbf {x}_n \mid \mathcal {A}_n)$.
In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe.
Semantically Conditioned GPT ::: Massive Plain Language Pre-training.
Large models trained on massive training corpora usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on OpenWebText BIBREF6, an extremely large text corpus. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences.
Semantically Conditioned GPT ::: Dialog-Act Controlled Pre-training.
To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples.
We first pre-process the dialog act $\mathcal {A}$ into a sequence of control codes, i.e., the intent followed by its slot-value pairs.
Meanwhile, the output sequence $\mathbf {x}^{\prime }$ is pre-processed by surrounding $\mathbf {x}$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $\mathcal {A}^{\prime }$ is concatenated with its augmented response $\mathbf {x}^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $\mathbf {x}^{\prime }$, while $\mathcal {A}^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM, and term our model as Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12.
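As an illustration of this preprocessing, here is a small sketch of how a (dialog act, response) pair can be flattened into a single training sequence; the exact surface form of the control codes (parentheses, separators, spacing) is an assumption and may differ from the actual implementation.

```python
# Sketch: linearize the dialog act A' and append the [BOS]/[EOS]-delimited
# response x'; the concatenation is what gets fed into GPT-2.
def linearize_dialog_act(intent, slot_values):
    pairs = " ; ".join(f"{s} = {v}" for s, v in slot_values)
    return f"{intent} ( {pairs} )"                        # A'

def build_training_sequence(intent, slot_values, response,
                            bos="[BOS]", eos="[EOS]"):
    act_str = linearize_dialog_act(intent, slot_values)
    resp_str = f"{bos} {response} {eos}"                  # x'
    return f"{act_str} {resp_str}"

seq = build_training_sequence(
    "confirm", [("name", "Hilton"), ("area", "center")],
    "Let me confirm that you are searching for Hilton in the center area")
# -> "confirm ( name = Hilton ; area = center ) [BOS] Let me confirm ... [EOS]"
```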
Semantically Conditioned GPT ::: Fine-tuning.
For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels.
It is worth noticing that the above recipe has several favorable properties:
Flexibility. SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts.
Controllability. In contrast to GPT-2 that generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information and maintain its fluency.
Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets.
Dataset: FewShotWOZ ::: Revisiting NLG Benchmarks.
The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9, BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice: labeling 50 utterances takes 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing are 100% covered by the training set for the E2E NLG dataset. This makes it difficult to evaluate the model's generalization ability for new domains.
Dataset: FewShotWOZ ::: FewShotWOZ.
To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is more than any existing NLG dataset. $({2})$ Fewer training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in the $\mathtt {Attraction}$/$\mathtt {Taxi}$/$\mathtt {Train}$ domains is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ are shown in Table TABREF26.
Dataset: FewShotWOZ ::: Collection Protocols.
We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples.
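A rough sketch of the grouping-and-sampling protocol described above is given below; the delexicalisation step and the data fields are placeholders rather than the actual collection scripts.

```python
# Group utterances by their delexicalised dialog act, keep one target utterance
# per act, then sample a small few-shot training set.
import random
from collections import defaultdict

def build_few_shot_domain(examples, n_train=50, seed=0):
    """examples: iterable of (delexicalised_dialog_act, utterance) pairs."""
    by_act = defaultdict(list)
    for act, utt in examples:
        by_act[act].append(utt)
    unique = [(act, utts[0]) for act, utts in by_act.items()]  # one target per act
    random.Random(seed).shuffle(unique)
    return unique[:n_train], unique[n_train:]   # few-shot train split, remainder
```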
Related Work ::: Pre-trained Models.
Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest lines of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated massive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, while Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates, and so on. Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems.
Related Work ::: Dialog.
Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, and Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models. This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset, FewShotWOZ, and a new model, SC-GPT.
Experiments
In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting?
Experiments ::: Experimental Setup ::: Implementation details.
The model was built upon the Huggingface Pytorch Transformer library BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. A linear learning-rate scheduler with a starting rate of 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch size of 8 on a machine with 8 Nvidia V100 GPUs until observing no significant progress on validation loss or up to 20 epochs, whichever is earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately for five epochs.
Experiments ::: Experimental Setup ::: Automatic metrics.
Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. The BLEU score evaluates how natural the generated utterance is compared with human references. ERR measures the exact matching of the slot tokens in the candidate utterances: $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$ and $q$ are the numbers of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output.
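The following sketch illustrates the ERR computation and the five-candidate selection rule; slot matching here is naive substring counting over slot values, used purely for illustration (the evaluation scripts may match slots differently).

```python
# ERR = (p + q) / M, with p missing and q redundant slots among the M slots of
# the dialog act; the candidate with the lowest ERR is kept as final output.
def slot_error_rate(candidate, slot_values):
    M = len(slot_values)
    if M == 0:
        return 0.0
    p = sum(1 for _, v in slot_values if v not in candidate)         # missing
    q = sum(max(candidate.count(v) - 1, 0) for _, v in slot_values)  # redundant
    return (p + q) / M

def pick_best(candidates, slot_values):
    # generate five utterances per dialog act, keep the one with lowest ERR
    return min(candidates, key=lambda c: slot_error_rate(c, slot_values))
```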
Experiments ::: Experimental Setup ::: Human evaluation.
We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which the generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as one written by a human. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected a total of 5800 judgements.
Experiments ::: Experimental Setup ::: Baselines.
We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance to SC-LSTM.
Experiments ::: FewShotWOZ
Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately.
Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend as the automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics; SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement.
Experiments ::: MultiWOZ
The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and references). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly to SC-GPT on the full MultiWOZ dataset; this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, the conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts.
To study how SC-GPT performs with different training data sizes, we further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when fewer in-domain labels are used for fine-tuning.
Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than that of humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, that SC-GPT serves as a simple and strong baseline in this setting, and that the combination provides a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses.
Experiments ::: Analysis
We perform detailed analysis to investigate SC-GPT's flexibility, controllability and generalizability. The test set is split into two subsets - seen and unseen. If the dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. This is further confirmed by the qualitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods are prone to over-generate or miss important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references.
We further simulate the process when deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit . We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in training data, shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate the setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value.
Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses.
Conclusion and Future Work
In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantic controllability and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations.
There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model. | BLEU scores and the slot error rate (ERR) |
b37fd665dfa5fad43977069d5623f4490a979305 | b37fd665dfa5fad43977069d5623f4490a979305_0 | Q: What existing methods is SC-GPT compared to?
Text: Introduction
Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface that interacts with users, NLG has a significant impact on the users' experience.
Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and natural, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training on labelled corpora. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses it as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this causes severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains.
We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios.
To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains.
In summary, our key contributions are three-fold:
A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available.
We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems.
On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domains much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research.
Background
A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts.
Specifically, a dialog act $\mathcal {A}$ is defined as the combination of an intent $\mathcal {I}$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$, where $P$ is the number of pairs, which varies in different dialog acts.
Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, and select.
Slot-value pairs indicate the category and content of the information to express in the utterance, respectively.
The goal of NLG is to translate $\mathcal {A}$ into a natural language response $\mathbf {x} = [x_1, \cdots , x_T]$, where $T$ is the sequence length. In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”.
Semantically Conditioned GPT
We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $\mathcal {D}=\lbrace (\mathcal {A}_n, \mathbf {x}_n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $\theta $ to characterize $p_{\theta }(\mathbf {x} \mid \mathcal {A})$. To leverage the sequential structure of response, one may further decompose the joint probability of $\mathbf {x}$ using the chain rule, casting an auto-regressive generation process as follows: $p_{\theta }(\mathbf {x} \mid \mathcal {A}) = \prod _{t=1}^{T} p_{\theta }(x_t \mid x_{<t}, \mathcal {A})$, where $x_{<t}$ indicates all tokens before $t$.
Learning $\theta $ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset: $\mathcal {L}_{\theta }(\mathcal {D}) = \sum _{n=1}^{N} \log p_{\theta }(\mathbf {x}_n \mid \mathcal {A}_n)$.
In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe.
Semantically Conditioned GPT ::: Massive Plain Language Pre-training.
Large models trained on massive training corpora usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on OpenWebText BIBREF6, an extremely large text corpus. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences.
Semantically Conditioned GPT ::: Dialog-Act Controlled Pre-training.
To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples.
We first pre-process the dialog act $\mathcal {A}$ into a sequence of control codes, i.e., the intent followed by its slot-value pairs.
Meanwhile, the output sequence $\mathbf {x}^{\prime }$ is pre-processed by surrounding $\mathbf {x}$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $\mathcal {A}^{\prime }$ is concatenated with its augmented response $\mathbf {x}^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $\mathbf {x}^{\prime }$, while $\mathcal {A}^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM, and term our model as Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12.
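A small sketch of restricting the prediction loss to the response tokens $\mathbf {x}^{\prime }$ is given below, using PyTorch conventions; the use of -100 as the ignored label index follows PyTorch's CrossEntropyLoss default, and the token ids are toy values.

```python
# Build labels for a concatenated [A' ; x'] sequence so that the language-
# modelling loss is computed only on the response part x'.
import torch

def build_labels(input_ids, act_len):
    """input_ids: token ids of A' followed by x'; act_len: length of A'."""
    labels = input_ids.clone()
    labels[:act_len] = -100          # ignored by CrossEntropyLoss -> no loss on A'
    return labels

input_ids = torch.tensor([5, 8, 13, 21, 2, 34, 55, 3])   # toy ids for A' + x'
labels = build_labels(input_ids, act_len=4)
# labels -> tensor([-100, -100, -100, -100,  2, 34, 55,  3])
```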
Semantically Conditioned GPT ::: Fine-tuning.
For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels.
It is worth noticing that the above recipe has several favorable properties:
Flexibility. SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts.
Controllability. In contrast to GPT-2 that generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information and maintain its fluency.
Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets.
Dataset: FewShotWOZ ::: Revisiting NLG Benchmarks.
The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9, BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice: labeling 50 utterances takes 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing are 100% covered by the training set for the E2E NLG dataset. This makes it difficult to evaluate the model's generalization ability for new domains.
Dataset: FewShotWOZ ::: FewShotWOZ.
To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is more than any existing NLG dataset. $({2})$ Fewer training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in the $\mathtt {Attraction}$/$\mathtt {Taxi}$/$\mathtt {Train}$ domains is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ are shown in Table TABREF26.
Dataset: FewShotWOZ ::: Collection Protocols.
We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples.
Related Work ::: Pre-trained Models.
Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest lines of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated massive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, while Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates, and so on. Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems.
Related Work ::: Dialog.
Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, and Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models. This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset, FewShotWOZ, and a new model, SC-GPT.
Experiments
In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting?
Experiments ::: Experimental Setup ::: Implementation details.
The model was built upon the Huggingface Pytorch Transformer library BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. A linear learning-rate scheduler with a starting rate of 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch size of 8 on a machine with 8 Nvidia V100 GPUs until observing no significant progress on validation loss or up to 20 epochs, whichever is earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately for five epochs.
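For concreteness, here is a condensed sketch of such a setup with the Huggingface Transformers API; it is not the authors' training script, the weight-decay value and total step count are assumptions, and batching, padding and device handling are omitted.

```python
# Fine-tuning skeleton: GPT2-Medium checkpoint, BPE tokenizer, AdamW with
# weight decay and a linear learning-rate schedule starting at 5e-5.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, get_linear_schedule_with_warmup

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")   # byte-pair encoding
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")     # 345M-parameter checkpoint
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000)  # step count: assumption

def train_step(input_ids, labels):
    # labels mask the dialog-act tokens so the loss covers only the response
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optimizer.step(); scheduler.step(); optimizer.zero_grad()
    return loss.item()
```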
Experiments ::: Experimental Setup ::: Automatic metrics.
Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. The BLEU score evaluates how natural the generated utterance is compared with human references. ERR measures the exact matching of the slot tokens in the candidate utterances: $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$ and $q$ are the numbers of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output.
Experiments ::: Experimental Setup ::: Human evaluation.
We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which the generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as one written by a human. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected a total of 5800 judgements.
Experiments ::: Experimental Setup ::: Baselines.
We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance to SC-LSTM.
Experiments ::: FewShotWOZ
Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately.
Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend as the automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics; SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement.
Experiments ::: MultiWOZ
The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and references). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly to SC-GPT on the full MultiWOZ dataset; this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, the conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts.
To study how SC-GPT performs with different training data sizes, we further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when fewer in-domain labels are used for fine-tuning.
Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than that of humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, that SC-GPT serves as a simple and strong baseline in this setting, and that the combination provides a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses.
Experiments ::: Analysis
We perform detailed analysis to investigate SC-GPT's flexibility, controllability and generalizability. The test set is split into two subsets - seen and unseen. If the dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. This is further confirmed by the qualitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods are prone to over-generate or miss important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references.
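A small sketch of the seen/unseen split is shown below; a test example is marked seen if its (delexicalised) dialog act occurs in the training set, and the field names are placeholders for whatever the data format provides.

```python
# Split test examples into "seen" and "unseen" subsets by dialog-act overlap
# with the training set.
def split_seen_unseen(train_acts, test_examples):
    train_set = set(train_acts)
    seen, unseen = [], []
    for act, utterance in test_examples:
        (seen if act in train_set else unseen).append((act, utterance))
    return seen, unseen
```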
We further simulate the process when deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit . We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in training data, shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate the setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value.
Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses.
Conclusion and Future Work
In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantic controllability and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations.
There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model. | $({1})$ SC-LSTM BIBREF3, $({2})$ GPT-2 BIBREF6 , $({3})$ HDSA BIBREF7 |
c1f4d632da78714308dc502fe4e7b16ea6f76f81 | c1f4d632da78714308dc502fe4e7b16ea6f76f81_0 | Q: Which language-pair had the better performance?
Text: Introduction
Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques. Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.
However, by considering sentence pairs separately and ignoring broader context, these models suffer from the lack of valuable contextual information, sometimes leading to inconsistency in a translated document. Adding document-level context helps to improve translation of context-dependent parts. Previous study BIBREF3 showed that such context gives substantial improvement in the handling of discourse phenomena like lexical disambiguation or co-reference resolution.
Most document-level NMT approaches focus on adding contextual information by taking into account a set of sentences surrounding the current pair BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. While giving significant improvement over the context-agnostic versions, none of these studies consider the whole document with well delimited boundaries. The majority of these approaches also rely on structural modification of the NMT model BIBREF6, BIBREF7, BIBREF8, BIBREF9. To the best of our knowledge, there is no existing work considering whole documents without structural modifications.
Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the document a sentence belongs to and is computed prior to training; it takes every word of the document into account. Our approach focuses on pre-processing and considers whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from the WMT 2019 dataset. We obtain important improvements over the baseline and present evidence that this approach helps to resolve cross-sentence ambiguities.
Related Work
Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.
Previous work on document-level SMT started with cache-based approaches; BIBREF11 suggest a conjunction of dynamic, static and topic-centered caches. More recent work tends to focus on strategies to capture context at the encoder level. Authors of BIBREF5 propose an auxiliary context source with a RNN dedicated to encoding contextual information, in addition to a warm-start of encoder and decoder states. They obtain significant gains over the baseline.
A first extension to attention-based neural architectures is proposed by BIBREF6, who add an encoder devoted to capturing the preceding source sentence. Authors of BIBREF7 introduce a hierarchical attention network to model contextual information from previous sentences. Here the attention allows dynamic access to the context by focusing on different sentences and words. They show significant improvements over a strong NMT baseline. More recently, BIBREF9 extend the Transformer architecture with an additional encoder to capture context and selectively merge sentence and context representations. They focus on co-reference resolution and obtain improvements in overall performances.
The closest approach to ours is presented by BIBREF4, who simply concatenate the previous source sentence to the one being translated. While they do not make any structural modification to the model, their method still does not take the whole document into account.
Approach
We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.
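The SWEM-aver computation can be sketched directly as below; `word_vectors` is assumed to map each in-vocabulary token to a vector of dimension $d_{model}$, and the zero-vector fallback for documents with no known word is an assumption.

```python
# SWEM-aver: the document embedding is the mean of the embeddings of its
# in-vocabulary words, and therefore has the same dimension as a word vector.
import numpy as np

def swem_aver(document_tokens, word_vectors, dim):
    vectors = [word_vectors[w] for w in document_tokens if w in word_vectors]
    if not vectors:                     # no known word in the document
        return np.zeros(dim)
    return np.mean(vectors, axis=0)
```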
Despite being straightforward, our approach requires already computed word vectors to keep consistency between word and document embeddings. Otherwise, fine-tuning embeddings as the model is training would shift them in a way that totally wipes out the connection between document and word vectors.
To address this problem, we adopt the following approach: First, we train a baseline Transformer model (denoted Baseline model) from which we extract word embeddings. Then, we estimate document embeddings using the SWEM-aver method and train an enhanced model (denoted Document model) benefiting from these document embeddings and the extracted word embeddings. During training, the Document model does not fine-tune its embeddings, to preserve the relation between words and document vectors. It should be noted that we could directly use word embeddings extracted from another model such as Word2Vec BIBREF13; in practice we obtain better results when we get these vectors from a Transformer model. In our case, we simply extract them from the Baseline after it has been trained.
Using domain adaptation ideas BIBREF14, BIBREF15, BIBREF16, we associate a tag to each sentence of the source corpus, which represents the document information. This tag takes the form of an additional token placed at the first position in the sentence and identifies the document the sentence belongs to (see Table TABREF1). The model considers the tag as an additional word and replaces it with the corresponding document embedding. The Baseline model is trained on a standard corpus that does not contain document tags, while the Document model is trained on a corpus that contains document tags.
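The source-side augmentation itself amounts to prefixing each sentence with its document tag, as in the sketch below; the tag spelling (<doc_k>) is an illustrative assumption, the actual format being the one shown in Table TABREF1.

```python
# Prefix every source sentence with a tag identifying its document; the model
# later replaces this token with the corresponding document embedding.
def tag_source_corpus(documents):
    """documents: list of documents, each a list of tokenized source sentences."""
    tagged = []
    for k, sentences in enumerate(documents):
        tag = f"<doc_{k}>"
        tagged.extend(f"{tag} {sentence}" for sentence in sentences)
    return tagged
```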
The proposed approach requires strong hypotheses about train and test data. The first drawback is the need for well defined document boundaries that allow each sentence to be marked with its document tag. The second major drawback is the need to compute an embedding vector for each new document fed into the model, adding a preprocessing step before inference time.
Experiments
We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds; we report averaged results and p-values for each experiment.
The translation tasks are English to German, as proposed in the first document-level translation task at WMT 2019 BIBREF17, and English to French and French to English, following the IWSLT translation task BIBREF18.
Experiments ::: Training and test sets
Table TABREF4 describes the data used for the English-German language pair. These corpora correspond to the WMT 2019 document-level translation task. Table TABREF5 describes the corpora for the English-French language pair; the same data is used for both translation directions.
For the English-German pair, only 10.4% (3.638M lines) of the training data contains document boundaries. For the English-French pair, we restricted the total amount of training data in order to keep 16.1% (602K lines) of document-delimited corpora. To achieve this we randomly sampled 10% of the ParaCrawl V3. This means that only a fraction of the source training data contains document context. The enhanced model learns to use document information only when it is available.
All test sets contain well-delimited documents. Baseline models are evaluated on standard corpora, while Document models are evaluated on the same corpora augmented with document context. We evaluate the English-German systems on newstest2017, newstest2018 and newstest2019, where documents consist of newspaper articles, to keep consistency with the training data. English to French and French to English systems are evaluated over IWSLT TED tst2013, tst2014 and tst2015, where documents are transcriptions of TED conferences (see Table TABREF5).
Prior to experiments, corpora are tokenized using Moses tokenizer BIBREF19. To limit vocabulary size, we adopt the BPE subword unit approach BIBREF20, through the SentencePiece toolkit BIBREF21, with 32K rules.
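As an illustration, the subword step could look like the snippet below; file names and the exact training options beyond the 32K BPE rules are assumptions, not the authors' actual commands.

```python
# Hedged sketch of BPE subword segmentation with the SentencePiece toolkit.
import sentencepiece as spm

# Train a 32K-rule BPE model on the tokenized training corpus (path is assumed).
spm.SentencePieceTrainer.train(
    input="train.tok.en", model_prefix="bpe32k", vocab_size=32000, model_type="bpe"
)

sp = spm.SentencePieceProcessor(model_file="bpe32k.model")
print(sp.encode("The Document model uses document tags .", out_type=str))
```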
Experiments ::: Training details
We use the OpenNMT framework BIBREF22 in its TensorFlow version to create and train our models. All experiments are run on a single NVIDIA V100 GPU. Since the proposed approach relies on a preprocessing step and not on structural enhancement of the model, we keep the same Transformer architecture in all experiments. Our Transformer configuration is similar to the baseline of BIBREF1 except for the size of word and document vectors that we set to $d_{model} = 1024$, these vectors are fixed during training. We use $N = 6$ as the number of encoder layers, $d_{ff} = 2048$ as the inner-layer dimensionality, $h = 8$ attention heads, $d_k = 64$ as queries and keys dimension and $Pdrop = 0.1$ as dropout probability. All experiments, including baselines, are run over 600k training steps with a batch size of approximately 3000 tokens.
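For readability, these hyperparameters can be summarized as a plain configuration object; the key names below are ours and do not correspond to actual OpenNMT option names.

```python
# Illustrative summary of the Transformer configuration used in all experiments.
transformer_config = {
    "d_model": 1024,            # word and document embedding size (frozen during training)
    "num_encoder_layers": 6,    # N
    "d_ff": 2048,               # inner feed-forward dimensionality
    "num_heads": 8,             # h
    "d_k": 64,                  # query/key dimension
    "dropout": 0.1,             # Pdrop
    "train_steps": 600_000,
    "batch_size_tokens": 3000,  # approximate
}
```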
For all language pairs, we trained a Baseline and a Document model. The Baseline is trained on a standard parallel corpus and is not aware of document embeddings; it is blind to the context and cannot link the sentences of a document. The Document model uses word embeddings extracted from the Baseline as initialization for its word vectors and also benefits from document embeddings that are computed from the extracted word embeddings. It is trained on the same corpus as the Baseline, but the training corpus is augmented with document tags (see Table TABREF1), and the model learns to make use of the document context.
The Document model does not treat its embeddings as tunable parameters: we hypothesize that fine-tuning word and document vectors breaks the relation between them, leading to poorer results. We provide evidence of this phenomenon with an additional system for the French-English language pair, noted Document+tuning (see Table TABREF7), which is identical to the Document model except that it adjusts its embeddings during training.
The evaluated models are obtained by taking the average of their last 6 checkpoints, which were written at 5000 steps intervals. All experiments are run 8 times with different seeds to ensure the statistical robustness of our results. We provide p-values that indicate the probability of observing similar or more extreme results if the Document model is actually not superior to the Baseline.
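The paper does not specify which statistical test produces these p-values, so the sketch below shows one plausible choice, a one-sided paired randomization test over the 8 seeded runs; the BLEU numbers in the usage example are illustrative placeholders.

```python
# Assumed paired randomization test: H0 is that the Document model is not
# better than the Baseline; the p-value is the share of sign-flipped samples
# whose mean difference is at least as large as the observed one.
import numpy as np

def paired_randomization_test(document_scores, baseline_scores, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.asarray(document_scores, dtype=float) - np.asarray(baseline_scores, dtype=float)
    observed = diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(trials, diffs.size))
    permuted = (signs * diffs).mean(axis=1)
    return float((permuted >= observed).mean())

# Placeholder scores for the 8 seeds (not real results from the paper).
p = paired_randomization_test([36.1, 35.9, 36.3, 36.0, 36.2, 35.8, 36.1, 36.0],
                              [35.3, 35.2, 35.5, 35.1, 35.4, 35.0, 35.3, 35.2])
print(p)
```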
Experiments ::: Results
Table TABREF6 presents the results associated with the experiments for the English to German translation task; models are evaluated on the newstest2017, newstest2018 and newstest2019 test sets. Table TABREF7 contains results for both the English to French and French to English translation tasks; models are evaluated on the tst2013, tst2014 and tst2015 test sets.
En$\rightarrow $De: The Baseline model obtained state-of-the-art BLEU and TER results according to BIBREF23, BIBREF24. The Document system shows the best results, gaining up to 0.85 BLEU points over the Baseline on the newstest2019 corpus. It also surpassed the Baseline by 0.18 BLEU points on the newstest2017 with strong statistical significance, and by 0.15 BLEU points on the newstest2018, but this time with no statistical evidence. These encouraging results prompted us to extend the experiments to another language pair: English-French.
En$\rightarrow $Fr: The Document system obtained the best results considering all metrics on all test sets with strong statistical evidence. It surpassed the Baseline by 1.09 BLEU points and 0.85 TER points on tst2015, 0.75 BLEU points and 0.76 TER points on tst2014, and 0.48 BLEU points and 0.68 TER points on tst2013.
Fr$\rightarrow $En: Of all experiments, this language pair shows the most important improvements over the Baseline. The Document model obtained substantial gains with very strong statistical evidence on all test sets. It surpassed the Baseline model by 1.81 BLEU points and 1.02 TER points on tst2015, 1.50 BLEU points and 0.96 TER points on tst2014, and 1.29 BLEU points and 0.83 TER points on tst2013.
The Document+tuning system, which differs only in that it tunes its embeddings, shows little or no improvement over the Baseline, leading us to the conclusion that the relation between word and document embeddings described by Eq. DISPLAY_FORM2 must be preserved for the model to fully benefit from the document context.
Experiments ::: Manual Analysis
In this analysis, we present some of the many cases that suggest the Document model can handle ambiguous situations. These examples are often isolated sentences where even a human translator could not predict the correct translation without looking at the document, making it almost impossible for the Baseline model, which is blind to the context. Table TABREF10 contains a selection of these interesting cases for the French-English language pair.
Translation from French to English is challenging and often requires taking the context into account. The personal pronoun "lui" can refer to a person of feminine gender, masculine gender or even an object and can therefore be translated into "her", "him" or "it". The first example in Table TABREF10 perfectly illustrates this ambiguity: the context clearly indicates that "lui" in the source sentence refers to "ma fille", which is located three sentences above, and should be translated into "her". In this case, the Baseline model predicts the personal pronoun "him" while the Document model correctly predicts "her". It seems that the Baseline model does not benefit from any valuable information in the source sentence. Some might argue that the source sentence actually contains clues about the correct translation, considering that "robe à paillettes" ("sparkly dress") and "baguette magique" ("magic wand") probably refer to a little girl, but we will see that the model makes similar choices in more restricted contexts. This example is relevant mainly because the actual reference to the subject "ma fille" is made long before the source sentence.
The second example in Table TABREF10 is interesting because none of our models correctly translate the source sentence. However, we observe that the Baseline model opts for a literal translation of "je peux faire le poirier" ("I can stand on my head") into "I can do the pear" while the Document model predicts "I can wring". Even though these translations are both incorrect, we observe that the Document model makes a prediction that somehow relates to the context: a woman talking about her past disability, who has become more flexible thanks to yoga and can now twist her body.
The third case in Table TABREF10 is a perfect example of an isolated sentence that cannot be translated correctly without contextual information. This example is tricky because the word "Elle" would be translated into "She" in most cases if no additional information were provided, but here it refers to "la conscience" ("consciousness") from the previous sentence and must be translated into "It". As expected, the Baseline model does not make the correct guess and predicts the personal pronoun "She" while the Document model correctly predicts "It". This example presents a second difficulty: the word "son" from the source sentence is ambiguous and does not, in itself, tell the translator whether it must be translated into "her", "his" or "its". With contextual information, we know that it refers to "[le] monde physique" ("[the] physical world") and that the correct choice is the word "its". Here the Baseline incorrectly predicts "her", possibly because of its earlier choice of "She" as the subject. The Document model again makes the correct translation.
According to our results (see Table TABREF7), the English-French language pair also benefits from document-level information, but to a lesser extent. For this language pair, ambiguities about personal pronouns are less frequent. Other ambiguous phenomena, such as the formal mode (use of "vous" instead of "tu"), appear. Table TABREF11 presents an example of this kind of situation, where the word "You" in the source sentence does not indicate whether the correct translation is "Vous" or "Tu". However, it refers to the narrator of the story, who is an old police officer. In this case, it is very likely that the formal mode is the correct translation. The Baseline model incorrectly predicts "Tu", while the Document model predicts "Vous".
Conclusion
In this work, we presented a preliminary study of a simple approach for document-level translation. The method allows the model to benefit from the whole document context at the sentence level, leading to encouraging results. In our experimental setup, we observed improvements of up to 0.85 BLEU points in the English to German translation task and of more than 1 BLEU point in the English to French and French to English translation tasks. Looking at the translation outputs, we provided evidence that the approach allows NMT models to disambiguate complex situations where the context is absolutely necessary, even for a human translator.
The next step is to go further by investigating more elaborate document embedding approaches and to bring these experiments to other languages (e.g.: Asian, Arabic, Italian, Spanish, etc.). To consider a training corpus with a majority of document delimited data is also very promising. | French-English |
749a307c3736c5b06d7b605dc228d80de36cbabe | 749a307c3736c5b06d7b605dc228d80de36cbabe_0 | Q: Which datasets were used in the experiment?
Text: Introduction
Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques. Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.
However, by considering sentence pairs separately and ignoring broader context, these models suffer from the lack of valuable contextual information, sometimes leading to inconsistency in a translated document. Adding document-level context helps to improve translation of context-dependent parts. Previous study BIBREF3 showed that such context gives substantial improvement in the handling of discourse phenomena like lexical disambiguation or co-reference resolution.
Most document-level NMT approaches focus on adding contextual information by taking into account a set of sentences surrounding the current pair BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. While giving significant improvement over the context-agnostic versions, none of these studies consider the whole document with well delimited boundaries. The majority of these approaches also rely on structural modification of the NMT model BIBREF6, BIBREF7, BIBREF8, BIBREF9. To the best of our knowledge, there is no existing work considering whole documents without structural modifications.
Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment the source data by adding document information to each sentence of a source corpus. This document information corresponds to the document a sentence belongs to and is computed prior to training; it takes every word of the document into account. Our approach focuses on pre-processing and considers whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from the WMT 2019 dataset. We obtain important improvements over the baseline and present evidence that this approach helps to resolve cross-sentence ambiguities.
Related Work
Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.
Previous work on document-level SMT started with cache based approaches, BIBREF11 suggest a conjunction of dynamic, static and topic-centered cache. More recent work tend to focus on strategies to capture context at the encoder level. Authors of BIBREF5 propose an auxiliary context source with a RNN dedicated to encode contextual information in addition to a warm-start of encoder and decoder states. They obtain significant gains over the baseline.
A first extension to attention-based neural architectures is proposed by BIBREF6, they add an encoder devoted to capture the preceding source sentence. Authors of BIBREF7 introduce a hierarchical attention network to model contextual information from previous sentences. Here the attention allows dynamic access to the context by focusing on different sentences and words. They show significant improvements over a strong NMT baseline. More recently, BIBREF9 extend Transformer architecture with an additional encoder to capture context and selectively merge sentence and context representations. They focus on co-reference resolution and obtain improvements in overall performances.
The closest approach to ours is presented by BIBREF4, who simply concatenate the previous source sentence with the one being translated. While they do not make any structural modification to the model, their method still does not take the whole document into account.
Approach
We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.
Despite being straightforward, our approach requires word vectors that have already been computed, in order to keep consistency between word and document embeddings. Otherwise, fine-tuning the embeddings while the model is training would shift them in a way that completely wipes out the connection between document and word vectors.
To address this problem, we adopt the following approach: First, we train a baseline Transformer model (noted Baseline model) from which we extract word embeddings. Then, we estimate document embeddings using the SWEM-aver method and train an enhanced model (noted Document model) benefiting from these document embeddings and the extracted word embeddings. During training, the Document model does not fine-tune its embeddings to preserve the relation between words and document vectors. It should be noted that we could directly use word embeddings extracted from another model such as Word2Vec BIBREF13, in practice we obtain better results when we get these vectors from a Transformer model. In our case, we simply extract them from the Baseline after it has been trained.
Using domain adaptation ideas BIBREF14, BIBREF15, BIBREF16, we associate a tag with each sentence of the source corpus, which represents the document information. This tag takes the form of an additional token placed at the first position in the sentence and identifies the document the sentence belongs to (see Table TABREF1). The model considers the tag as an additional word and replaces it with the corresponding document embedding. The Baseline model is trained on a standard corpus that does not contain document tags, while the Document model is trained on a corpus that contains document tags.
The proposed approach requires strong hypotheses about training and test data. The first drawback is the need for well-defined document boundaries that allow each sentence to be marked with its document tag. The second major drawback is the need to compute an embedding vector for each new document fed to the model, adding a preprocessing step before inference time.
Experiments
We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment.
Translation tasks are English to German, proposed in the first document-level translation task at WMT 2019 BIBREF17, English to French and French to English, following the IWSLT translation task BIBREF18.
Experiments ::: Training and test sets
Table TABREF4 describes the data used for the English-German language pair. These corpora correspond to the WMT 2019 document-level translation task. Table TABREF5 describes corpora for the English-French language pair, the same data is used for both translation directions.
For the English-German pair, only 10.4% (3.638M lines) of the training data contains document boundaries. For the English-French pair, we restricted the total amount of training data in order to keep 16.1% (602K lines) of document-delimited corpora. To achieve this, we randomly sampled 10% of ParaCrawl V3. This means that only a fraction of the source training data contains document context. The enhanced model learns to use document information only when it is available.
All test sets contain well-delimited documents. Baseline models are evaluated on standard corpora, while Document models are evaluated on the same corpora augmented with document context. We evaluate the English-German systems on newstest2017, newstest2018 and newstest2019, where documents consist of newspaper articles, to keep consistency with the training data. English to French and French to English systems are evaluated over IWSLT TED tst2013, tst2014 and tst2015, where documents are transcriptions of TED conferences (see Table TABREF5).
Prior to experiments, corpora are tokenized using Moses tokenizer BIBREF19. To limit vocabulary size, we adopt the BPE subword unit approach BIBREF20, through the SentencePiece toolkit BIBREF21, with 32K rules.
Experiments ::: Training details
We use the OpenNMT framework BIBREF22 in its TensorFlow version to create and train our models. All experiments are run on a single NVIDIA V100 GPU. Since the proposed approach relies on a preprocessing step and not on structural enhancement of the model, we keep the same Transformer architecture in all experiments. Our Transformer configuration is similar to the baseline of BIBREF1 except for the size of word and document vectors that we set to $d_{model} = 1024$, these vectors are fixed during training. We use $N = 6$ as the number of encoder layers, $d_{ff} = 2048$ as the inner-layer dimensionality, $h = 8$ attention heads, $d_k = 64$ as queries and keys dimension and $Pdrop = 0.1$ as dropout probability. All experiments, including baselines, are run over 600k training steps with a batch size of approximately 3000 tokens.
For all language pairs, we trained a Baseline and a Document model. The Baseline is trained on a standard parallel corpus and is not aware of document embeddings; it is blind to the context and cannot link the sentences of a document. The Document model uses word embeddings extracted from the Baseline as initialization for its word vectors and also benefits from document embeddings that are computed from the extracted word embeddings. It is trained on the same corpus as the Baseline, but the training corpus is augmented with document tags (see Table TABREF1), and the model learns to make use of the document context.
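A minimal sketch of this initialization is shown below, assuming the extracted Baseline vectors are available as a token-to-vector dictionary; the function name and the zero-initialization for missing tokens are our assumptions.

```python
# Build the Document model's (frozen) word embedding table from vectors
# extracted from the trained Baseline model.
import numpy as np

def build_embedding_matrix(vocab, extracted_vectors, dim=1024):
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for index, token in enumerate(vocab):
        if token in extracted_vectors:
            matrix[index] = extracted_vectors[token]
    return matrix  # passed to the model as a constant, non-trainable embedding table
```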
The Document model does not treat its embeddings as tunable parameters: we hypothesize that fine-tuning word and document vectors breaks the relation between them, leading to poorer results. We provide evidence of this phenomenon with an additional system for the French-English language pair, noted Document+tuning (see Table TABREF7), which is identical to the Document model except that it adjusts its embeddings during training.
The evaluated models are obtained by taking the average of their last 6 checkpoints, which were written at 5000 steps intervals. All experiments are run 8 times with different seeds to ensure the statistical robustness of our results. We provide p-values that indicate the probability of observing similar or more extreme results if the Document model is actually not superior to the Baseline.
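Checkpoint averaging can be pictured with the framework-agnostic sketch below, which treats each checkpoint as a mapping from parameter names to arrays; NMT toolkits usually ship their own utility for this, so this is only meant to convey the idea.

```python
# Average the parameters of the last checkpoints element-wise.
import numpy as np

def average_checkpoints(checkpoints):
    """checkpoints: list of dicts mapping parameter name -> np.ndarray."""
    return {
        name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
        for name in checkpoints[0]
    }

# e.g. averaged_model = average_checkpoints(last_six_checkpoints)
```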
Experiments ::: Results
Table TABREF6 presents the results associated with the experiments for the English to German translation task; models are evaluated on the newstest2017, newstest2018 and newstest2019 test sets. Table TABREF7 contains results for both the English to French and French to English translation tasks; models are evaluated on the tst2013, tst2014 and tst2015 test sets.
En$\rightarrow $De: The Baseline model obtained state-of-the-art BLEU and TER results according to BIBREF23, BIBREF24. The Document system shows the best results, gaining up to 0.85 BLEU points over the Baseline on the newstest2019 corpus. It also surpassed the Baseline by 0.18 BLEU points on the newstest2017 with strong statistical significance, and by 0.15 BLEU points on the newstest2018, but this time with no statistical evidence. These encouraging results prompted us to extend the experiments to another language pair: English-French.
En$\rightarrow $Fr: The Document system obtained the best results considering all metrics on all test sets with strong statistical evidence. It surpassed the Baseline by 1.09 BLEU points and 0.85 TER points on tst2015, 0.75 BLEU points and 0.76 TER points on tst2014, and 0.48 BLEU points and 0.68 TER points on tst2013.
Fr$\rightarrow $En: Of all experiments, this language pair shows the most important improvements over the Baseline. The Document model obtained substantial gains with very strong statistical evidence on all test sets. It surpassed the Baseline model by 1.81 BLEU points and 1.02 TER points on tst2015, 1.50 BLEU points and 0.96 TER points on tst2014, and 1.29 BLEU points and 0.83 TER points on tst2013.
The Document+tuning system, which differs only in that it tunes its embeddings, shows little or no improvement over the Baseline, leading us to the conclusion that the relation between word and document embeddings described by Eq. DISPLAY_FORM2 must be preserved for the model to fully benefit from the document context.
Experiments ::: Manual Analysis
In this analysis, we present some of the many cases that suggest the Document model can handle ambiguous situations. These examples are often isolated sentences where even a human translator could not predict the correct translation without looking at the document, making it almost impossible for the Baseline model, which is blind to the context. Table TABREF10 contains a selection of these interesting cases for the French-English language pair.
Translation from French to English is challenging and often requires taking the context into account. The personal pronoun "lui" can refer to a person of feminine gender, masculine gender or even an object and can therefore be translated into "her", "him" or "it". The first example in Table TABREF10 perfectly illustrates this ambiguity: the context clearly indicates that "lui" in the source sentence refers to "ma fille", which is located three sentences above, and should be translated into "her". In this case, the Baseline model predicts the personal pronoun "him" while the Document model correctly predicts "her". It seems that the Baseline model does not benefit from any valuable information in the source sentence. Some might argue that the source sentence actually contains clues about the correct translation, considering that "robe à paillettes" ("sparkly dress") and "baguette magique" ("magic wand") probably refer to a little girl, but we will see that the model makes similar choices in more restricted contexts. This example is relevant mainly because the actual reference to the subject "ma fille" is made long before the source sentence.
The second example in Table TABREF10 is interesting because none of our models correctly translate the source sentence. However, we observe that the Baseline model opts for a literal translation of "je peux faire le poirier" ("I can stand on my head") into "I can do the pear" while the Document model predicts "I can wring". Even though these translations are both incorrect, we observe that the Document model makes a prediction that somehow relates to the context: a woman talking about her past disability, who has become more flexible thanks to yoga and can now twist her body.
The third case in Table TABREF10 is a perfect example of an isolated sentence that cannot be translated correctly without contextual information. This example is tricky because the word "Elle" would be translated into "She" in most cases if no additional information were provided, but here it refers to "la conscience" ("consciousness") from the previous sentence and must be translated into "It". As expected, the Baseline model does not make the correct guess and predicts the personal pronoun "She" while the Document model correctly predicts "It". This example presents a second difficulty: the word "son" from the source sentence is ambiguous and does not, in itself, tell the translator whether it must be translated into "her", "his" or "its". With contextual information, we know that it refers to "[le] monde physique" ("[the] physical world") and that the correct choice is the word "its". Here the Baseline incorrectly predicts "her", possibly because of its earlier choice of "She" as the subject. The Document model again makes the correct translation.
According to our results (see Table TABREF7), the English-French language pair also benefits from document-level information, but to a lesser extent. For this language pair, ambiguities about personal pronouns are less frequent. Other ambiguous phenomena, such as the formal mode (use of "vous" instead of "tu"), appear. Table TABREF11 presents an example of this kind of situation, where the word "You" in the source sentence does not indicate whether the correct translation is "Vous" or "Tu". However, it refers to the narrator of the story, who is an old police officer. In this case, it is very likely that the formal mode is the correct translation. The Baseline model incorrectly predicts "Tu", while the Document model predicts "Vous".
Conclusion
In this work, we presented a preliminary study of a simple approach for document-level translation. The method allows the model to benefit from the whole document context at the sentence level, leading to encouraging results. In our experimental setup, we observed improvements of up to 0.85 BLEU points in the English to German translation task and of more than 1 BLEU point in the English to French and French to English translation tasks. Looking at the translation outputs, we provided evidence that the approach allows NMT models to disambiguate complex situations where the context is absolutely necessary, even for a human translator.
The next step is to go further by investigating more elaborate document embedding approaches and to bring these experiments to other languages (e.g.: Asian, Arabic, Italian, Spanish, etc.). To consider a training corpus with a majority of document delimited data is also very promising. | WMT 2019 parallel dataset, a restricted dataset containing the full TED corpus from MUST-C BIBREF10, sampled sentences from WMT 2019 dataset |
102de97c123bb1e247efec0f1d958f8a3a86e2f6 | 102de97c123bb1e247efec0f1d958f8a3a86e2f6_0 | Q: What evaluation metrics did they use?
Text: Introduction
Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques. Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.
However, by considering sentence pairs separately and ignoring broader context, these models suffer from the lack of valuable contextual information, sometimes leading to inconsistency in a translated document. Adding document-level context helps to improve translation of context-dependent parts. Previous study BIBREF3 showed that such context gives substantial improvement in the handling of discourse phenomena like lexical disambiguation or co-reference resolution.
Most document-level NMT approaches focus on adding contextual information by taking into account a set of sentences surrounding the current pair BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. While giving significant improvement over the context-agnostic versions, none of these studies consider the whole document with well delimited boundaries. The majority of these approaches also rely on structural modification of the NMT model BIBREF6, BIBREF7, BIBREF8, BIBREF9. To the best of our knowledge, there is no existing work considering whole documents without structural modifications.
Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment the source data by adding document information to each sentence of a source corpus. This document information corresponds to the document a sentence belongs to and is computed prior to training; it takes every word of the document into account. Our approach focuses on pre-processing and considers whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from the WMT 2019 dataset. We obtain important improvements over the baseline and present evidence that this approach helps to resolve cross-sentence ambiguities.
Related Work
Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.
Previous work on document-level SMT started with cache based approaches, BIBREF11 suggest a conjunction of dynamic, static and topic-centered cache. More recent work tend to focus on strategies to capture context at the encoder level. Authors of BIBREF5 propose an auxiliary context source with a RNN dedicated to encode contextual information in addition to a warm-start of encoder and decoder states. They obtain significant gains over the baseline.
A first extension to attention-based neural architectures is proposed by BIBREF6, they add an encoder devoted to capture the preceding source sentence. Authors of BIBREF7 introduce a hierarchical attention network to model contextual information from previous sentences. Here the attention allows dynamic access to the context by focusing on different sentences and words. They show significant improvements over a strong NMT baseline. More recently, BIBREF9 extend Transformer architecture with an additional encoder to capture context and selectively merge sentence and context representations. They focus on co-reference resolution and obtain improvements in overall performances.
The closest approach to ours is presented by BIBREF4, who simply concatenate the previous source sentence with the one being translated. While they do not make any structural modification to the model, their method still does not take the whole document into account.
Approach
We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.
Despite being straightforward, our approach requires word vectors that have already been computed, in order to keep consistency between word and document embeddings. Otherwise, fine-tuning the embeddings while the model is training would shift them in a way that completely wipes out the connection between document and word vectors.
To address this problem, we adopt the following approach: First, we train a baseline Transformer model (noted Baseline model) from which we extract word embeddings. Then, we estimate document embeddings using the SWEM-aver method and train an enhanced model (noted Document model) benefiting from these document embeddings and the extracted word embeddings. During training, the Document model does not fine-tune its embeddings to preserve the relation between words and document vectors. It should be noted that we could directly use word embeddings extracted from another model such as Word2Vec BIBREF13, in practice we obtain better results when we get these vectors from a Transformer model. In our case, we simply extract them from the Baseline after it has been trained.
Using domain adaptation ideas BIBREF14, BIBREF15, BIBREF16, we associate a tag with each sentence of the source corpus, which represents the document information. This tag takes the form of an additional token placed at the first position in the sentence and identifies the document the sentence belongs to (see Table TABREF1). The model considers the tag as an additional word and replaces it with the corresponding document embedding. The Baseline model is trained on a standard corpus that does not contain document tags, while the Document model is trained on a corpus that contains document tags.
The proposed approach requires strong hypotheses about training and test data. The first drawback is the need for well-defined document boundaries that allow each sentence to be marked with its document tag. The second major drawback is the need to compute an embedding vector for each new document fed to the model, adding a preprocessing step before inference time.
Experiments
We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment.
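The paper does not state which implementation computes BLEU and TER; the snippet below shows how such scores could be obtained with the sacrebleu package, with toy sentences as placeholders.

```python
# Hedged scoring sketch: corpus-level BLEU and TER with sacrebleu.
from sacrebleu.metrics import BLEU, TER

hypotheses = ["The cat sat on the mat ."]                 # system outputs (placeholders)
references = [["The cat is sitting on the mat ."]]        # one reference stream

print(BLEU().corpus_score(hypotheses, references).score)
print(TER().corpus_score(hypotheses, references).score)
```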
Translation tasks are English to German, proposed in the first document-level translation task at WMT 2019 BIBREF17, English to French and French to English, following the IWSLT translation task BIBREF18.
Experiments ::: Training and test sets
Table TABREF4 describes the data used for the English-German language pair. These corpora correspond to the WMT 2019 document-level translation task. Table TABREF5 describes corpora for the English-French language pair, the same data is used for both translation directions.
For the English-German pair, only 10.4% (3.638M lines) of the training data contains document boundaries. For the English-French pair, we restricted the total amount of training data in order to keep 16.1% (602K lines) of document-delimited corpora. To achieve this, we randomly sampled 10% of ParaCrawl V3. This means that only a fraction of the source training data contains document context. The enhanced model learns to use document information only when it is available.
All test sets contain well-delimited documents. Baseline models are evaluated on standard corpora, while Document models are evaluated on the same corpora augmented with document context. We evaluate the English-German systems on newstest2017, newstest2018 and newstest2019, where documents consist of newspaper articles, to keep consistency with the training data. English to French and French to English systems are evaluated over IWSLT TED tst2013, tst2014 and tst2015, where documents are transcriptions of TED conferences (see Table TABREF5).
Prior to experiments, corpora are tokenized using Moses tokenizer BIBREF19. To limit vocabulary size, we adopt the BPE subword unit approach BIBREF20, through the SentencePiece toolkit BIBREF21, with 32K rules.
Experiments ::: Training details
We use the OpenNMT framework BIBREF22 in its TensorFlow version to create and train our models. All experiments are run on a single NVIDIA V100 GPU. Since the proposed approach relies on a preprocessing step and not on structural enhancement of the model, we keep the same Transformer architecture in all experiments. Our Transformer configuration is similar to the baseline of BIBREF1 except for the size of word and document vectors that we set to $d_{model} = 1024$, these vectors are fixed during training. We use $N = 6$ as the number of encoder layers, $d_{ff} = 2048$ as the inner-layer dimensionality, $h = 8$ attention heads, $d_k = 64$ as queries and keys dimension and $Pdrop = 0.1$ as dropout probability. All experiments, including baselines, are run over 600k training steps with a batch size of approximately 3000 tokens.
For all language pairs, we trained a Baseline and a Document model. The Baseline is trained on a standard parallel corpus and is not aware of document embeddings; it is blind to the context and cannot link the sentences of a document. The Document model uses word embeddings extracted from the Baseline as initialization for its word vectors and also benefits from document embeddings that are computed from the extracted word embeddings. It is trained on the same corpus as the Baseline, but the training corpus is augmented with document tags (see Table TABREF1), and the model learns to make use of the document context.
The Document model does not treat its embeddings as tunable parameters: we hypothesize that fine-tuning word and document vectors breaks the relation between them, leading to poorer results. We provide evidence of this phenomenon with an additional system for the French-English language pair, noted Document+tuning (see Table TABREF7), which is identical to the Document model except that it adjusts its embeddings during training.
The evaluated models are obtained by taking the average of their last 6 checkpoints, which were written at 5000 steps intervals. All experiments are run 8 times with different seeds to ensure the statistical robustness of our results. We provide p-values that indicate the probability of observing similar or more extreme results if the Document model is actually not superior to the Baseline.
Experiments ::: Results
Table TABREF6 presents the results associated with the experiments for the English to German translation task; models are evaluated on the newstest2017, newstest2018 and newstest2019 test sets. Table TABREF7 contains results for both the English to French and French to English translation tasks; models are evaluated on the tst2013, tst2014 and tst2015 test sets.
En$\rightarrow $De: The Baseline model obtained state-of-the-art BLEU and TER results according to BIBREF23, BIBREF24. The Document system shows the best results, gaining up to 0.85 BLEU points over the Baseline on the newstest2019 corpus. It also surpassed the Baseline by 0.18 BLEU points on the newstest2017 with strong statistical significance, and by 0.15 BLEU points on the newstest2018, but this time with no statistical evidence. These encouraging results prompted us to extend the experiments to another language pair: English-French.
En$\rightarrow $Fr: The Document system obtained the best results considering all metrics on all test sets with strong statistical evidence. It surpassed the Baseline by 1.09 BLEU points and 0.85 TER points on tst2015, 0.75 BLEU points and 0.76 TER points on tst2014, and 0.48 BLEU points and 0.68 TER points on tst2013.
Fr$\rightarrow $En: Of all experiments, this language pair shows the most important improvements over the Baseline. The Document model obtained substantial gains with very strong statistical evidence on all test sets. It surpassed the Baseline model by 1.81 BLEU points and 1.02 TER points on tst2015, 1.50 BLEU points and 0.96 TER points on tst2014, and 1.29 BLEU points and 0.83 TER points on tst2013.
The Document+tuning system, which differs only in that it tunes its embeddings, shows little or no improvement over the Baseline, leading us to the conclusion that the relation between word and document embeddings described by Eq. DISPLAY_FORM2 must be preserved for the model to fully benefit from the document context.
Experiments ::: Manual Analysis
In this analysis, we present some of the many cases that suggest the Document model can handle ambiguous situations. These examples are often isolated sentences where even a human translator could not predict the correct translation without looking at the document, making it almost impossible for the Baseline model, which is blind to the context. Table TABREF10 contains a selection of these interesting cases for the French-English language pair.
Translation from French to English is challenging and often requires taking the context into account. The personal pronoun "lui" can refer to a person of feminine gender, masculine gender or even an object and can therefore be translated into "her", "him" or "it". The first example in Table TABREF10 perfectly illustrates this ambiguity: the context clearly indicates that "lui" in the source sentence refers to "ma fille", which is located three sentences above, and should be translated into "her". In this case, the Baseline model predicts the personal pronoun "him" while the Document model correctly predicts "her". It seems that the Baseline model does not benefit from any valuable information in the source sentence. Some might argue that the source sentence actually contains clues about the correct translation, considering that "robe à paillettes" ("sparkly dress") and "baguette magique" ("magic wand") probably refer to a little girl, but we will see that the model makes similar choices in more restricted contexts. This example is relevant mainly because the actual reference to the subject "ma fille" is made long before the source sentence.
The second example in Table TABREF10 is interesting because none of our models correctly translate the source sentence. However, we observe that the Baseline model opts for a literal translation of "je peux faire le poirier" ("I can stand on my head") into "I can do the pear" while the Document model predicts "I can wring". Even though these translations are both incorrect, we observe that the Document model makes a prediction that somehow relates to the context: a woman talking about her past disability, who has become more flexible thanks to yoga and can now twist her body.
The third case in Table TABREF10 is a perfect example of an isolated sentence that cannot be translated correctly without contextual information. This example is tricky because the word "Elle" would be translated into "She" in most cases if no additional information were provided, but here it refers to "la conscience" ("consciousness") from the previous sentence and must be translated into "It". As expected, the Baseline model does not make the correct guess and predicts the personal pronoun "She" while the Document model correctly predicts "It". This example presents a second difficulty: the word "son" from the source sentence is ambiguous and does not, in itself, tell the translator whether it must be translated into "her", "his" or "its". With contextual information, we know that it refers to "[le] monde physique" ("[the] physical world") and that the correct choice is the word "its". Here the Baseline incorrectly predicts "her", possibly because of its earlier choice of "She" as the subject. The Document model again makes the correct translation.
According to our results (see Table TABREF7), the English-French language pair also benefits from document-level information, but to a lesser extent. For this language pair, ambiguities about personal pronouns are less frequent. Other ambiguous phenomena, such as the formal mode (use of "vous" instead of "tu"), appear. Table TABREF11 presents an example of this kind of situation, where the word "You" in the source sentence does not indicate whether the correct translation is "Vous" or "Tu". However, it refers to the narrator of the story, who is an old police officer. In this case, it is very likely that the formal mode is the correct translation. The Baseline model incorrectly predicts "Tu", while the Document model predicts "Vous".
Conclusion
In this work, we presented a preliminary study of a simple approach for document-level translation. The method allows the model to benefit from the whole document context at the sentence level, leading to encouraging results. In our experimental setup, we observed improvements of up to 0.85 BLEU points in the English to German translation task and of more than 1 BLEU point in the English to French and French to English translation tasks. Looking at the translation outputs, we provided evidence that the approach allows NMT models to disambiguate complex situations where the context is absolutely necessary, even for a human translator.
The next step is to go further by investigating more elaborate document embedding approaches and to bring these experiments to other languages (e.g.: Asian, Arabic, Italian, Spanish, etc.). To consider a training corpus with a majority of document delimited data is also very promising. | BLEU and TER scores |
3460393d6888dd34113fa0813a1b3a1514c66aa6 | 3460393d6888dd34113fa0813a1b3a1514c66aa6_0 | Q: Do they evaluate only on English datasets?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising INLINEFORM0 -score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well-studied social science topic dating back to 1927 BIBREF17 . However, the notion of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples of how hip-hop music shared on social media websites to harass rival gang members often ended up in real-world collisions among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We manually verified each discovered profile by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns displayed in a threatening way, stacks of money, gang hand signs and gestures, or people holding or posing with a gun appeared likely to belong to a gang member. Such images were often identified in profiles of users who submitted tweets that contained messages of support or sadness for prisoners or recently fallen gang members, or who used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in steps 1, 2, 4, and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media, which curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that is not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
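To make the seed-term matching in Step 1 concrete, the sketch below filters a batch of already-collected profile descriptions against seed hashtags and their informal spelling variants. It is only an illustration: the study itself queried the Followerwonk and Twitter REST APIs, and the term table and Profile structure shown here are assumptions rather than the actual pipeline.

```python
# Minimal sketch of the Step 1 keyword matching over collected profile
# descriptions. SEED_TERMS and Profile are illustrative assumptions.
from dataclasses import dataclass

SEED_TERMS = {
    "#freedaguys": ["#freedaguys", "#freetheguys"],
    "#ripdaguys": ["#ripdaguys", "#riptheguys"],
    "#fuckdaopps": ["#fuckdaopps", "#fucktheopps"],
}

@dataclass
class Profile:
    screen_name: str
    description: str

def matches_seed_terms(profile: Profile) -> bool:
    """True if the profile description contains any seed-term spelling variant."""
    text = profile.description.lower()
    return any(variant in text
               for variants in SEED_TERMS.values()
               for variant in variants)

profiles = [
    Profile("user_a", "#FreeDaGuys #RIPDaGuys stay solid"),
    Profile("user_b", "dog lover, coffee, travel"),
]
print([p.screen_name for p in profiles if matches_seed_terms(p)])  # ['user_a']
```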
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location-neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but do not belong to gang members, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles and all stop words, and we performed stemming across all tweets and profile descriptions.
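A rough illustration of this preprocessing step, assuming NLTK's stop word list and Porter stemmer, is sketched below; the seed-term list and the tokenization are placeholders, since the paper does not specify them.

```python
# One plausible implementation of the preprocessing step: drop seed terms and
# stop words, then apply Porter stemming. Requires: nltk.download('stopwords').
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

SEED_WORDS = {"#freedaguys", "#ripdaguys", "#fuckdaopps"}  # placeholder list
STOP_WORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    """Lowercase, tokenize, remove seed/stop words, and stem the remainder."""
    tokens = re.findall(r"[#@]?\w+", text.lower())
    kept = [t for t in tokens if t not in SEED_WORDS and t not in STOP_WORDS]
    return [stemmer.stem(t) for t in kept]

print(preprocess("Smoking and making money with the guys #FreeDaGuys"))
# -> stemmed tokens such as ['smoke', 'make', ...]
```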
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, curse words represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated fellow gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
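The music-interest features can be sketched as follows: extract YouTube video IDs from tweet text and build a bag of words over whatever descriptions and comments have been fetched for those videos. The fetching step, done with the YouTube API in the study, is abstracted here behind a hypothetical fetch_video_text function.

```python
# Sketch of the music-interest features: find YouTube video IDs in tweets and
# build a bag of words from their descriptions/comments. `fetch_video_text`
# is a hypothetical stand-in for the YouTube API call.
import re
from collections import Counter

YOUTUBE_ID = re.compile(r"(?:youtu\.be/|youtube\.com/watch\?v=)([\w-]{11})")

def extract_video_ids(tweets):
    """Collect YouTube video IDs mentioned across a user's tweets."""
    ids = set()
    for text in tweets:
        ids.update(YOUTUBE_ID.findall(text))
    return ids

def music_bag_of_words(tweets, fetch_video_text):
    """Bag of words over the description and comments of every shared video."""
    bag = Counter()
    for vid in extract_video_ids(tweets):
        bag.update(fetch_video_text(vid).lower().split())
    return bag

tweets = ["check this out https://youtu.be/dQw4w9WgXcQ"]
print(music_bag_of_words(tweets, lambda vid: "gangsta rap classic"))
```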
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied whether and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The most frequently used emoji among gang members was the fuel pump emoji, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset; it is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. Variants of the angry face emoji, such as the devil face emoji and the imp emoji, were also common in gang member tweets. The frequency of each emoji symbol across a user's set of tweets is thus considered as a feature for our classifier.
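A rough sketch of turning this emoji usage into classifier features is shown below: per-emoji frequencies plus an indicator for the police-and-pistol `chain'. Emojis are detected with a simple Unicode-range pattern rather than a full emoji library, so this approximates the feature extraction rather than reproducing the authors' implementation.

```python
# Approximate emoji feature extraction: per-emoji counts plus a flag for the
# police-and-pistol chain. The Unicode ranges below are a rough emoji matcher.
import re
from collections import Counter

EMOJI = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
)
POLICE, PISTOL = "\U0001F46E", "\U0001F52B"

def emoji_features(tweets):
    """Per-emoji frequencies and an indicator for adjacent police/pistol emoji."""
    counts = Counter()
    chained = False
    for text in tweets:
        emojis = EMOJI.findall(text)
        counts.update(emojis)
        for a, b in zip(emojis, emojis[1:]):
            if {a, b} == {POLICE, PISTOL}:
                chained = True
    features = dict(counts)
    features["police_pistol_chain"] = int(chained)
    return features

print(emoji_features(["\U0001F46E\U0001F52B on sight", "\U0001F4AF"]))
```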
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
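Once a tagging service such as Clarifai has returned scored keywords for the profile and cover images, folding them into count features is straightforward. The sketch below assumes the tagging call has already happened and that each image is represented by a list of (tag, score) pairs; the data shown is illustrative, not Clarifai output.

```python
# Sketch of image-tag features, assuming a tagging service has already returned
# (keyword, confidence) pairs for each profile and cover image.
from collections import Counter

def image_tag_features(tagged_images, top_k=20, min_score=0.0):
    """Count tags across a profile's images, keeping up to top_k tags per image."""
    counts = Counter()
    for tags in tagged_images:
        kept = [kw for kw, score in tags if score >= min_score][:top_k]
        counts.update(kept)
    return dict(counts)

profile_images = [
    [("people", 0.99), ("weapon", 0.91), ("money", 0.82)],  # profile photo
    [("people", 0.97), ("graffiti", 0.88)],                 # cover photo
]
print(image_tag_features(profile_images))
# {'people': 2, 'weapon': 1, 'money': 1, 'graffiti': 1}
```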
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as vectors of term frequencies, where the terms were collected from one or more of the feature sets described above.
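A minimal version of this training setup, assuming each profile's selected feature text has been concatenated into a single string, might look like the sketch below. It uses scikit-learn's stock implementations of the four model families (a linear SVM stands in for the SVM), and the data and hyperparameters are placeholders rather than the settings used in the study.

```python
# Minimal training sketch over term-frequency vectors with the four model
# families; data and hyperparameters are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["free da guys rip the fallen", "love books and music", "smoke money high"]
labels = [1, 0, 1]  # 1 = gang, 0 = non-gang (toy data)

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": LinearSVC(),
}

for name, clf in models.items():
    pipeline = make_pipeline(CountVectorizer(), clf)  # term frequencies -> model
    pipeline.fit(docs, labels)
    print(name, pipeline.predict(["rip da guys"]))
```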
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the predictive power of each feature type on its own. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the Twitter profiles that had at least one emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models, described below.
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely, Precision $= \frac{tp}{tp + fp}$, Recall $= \frac{tp}{tp + fn}$, and F1-score $= \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$, where $tp$ is the number of true positives, $fp$ is the number of false positives, $tn$ is the number of true negatives, and $fn$ is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
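The evaluation loop can be reproduced in outline with scikit-learn's cross-validation utilities, reporting precision, recall, and F1 for each class separately, as sketched below. Only the 10-fold setting comes from the paper; the data, vectorizer, and model choice are placeholders.

```python
# Outline of the per-class evaluation under 10-fold cross validation;
# the data, vectorizer, and model below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline

docs = ["free da guys", "love music", "rip the fallen", "coffee and books"] * 30
labels = np.array([1, 0, 1, 0] * 30)  # 1 = gang, 0 = non-gang (toy data)

pipeline = make_pipeline(CountVectorizer(), RandomForestClassifier())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
preds = cross_val_predict(pipeline, docs, labels, cv=cv)

prec, rec, f1, _ = precision_recall_fscore_support(labels, preds, labels=[1, 0])
for cls, p, r, f in zip(["gang", "non-gang"], prec, rec, f1):
    print(f"{cls}: precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```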
Experimental results
Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Since it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting that a Twitter user is a non-gang member is much easier than predicting that a Twitter user is a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs, and guns, which are often shared by gang members. This led to a small set of fairly generic image tags in their profiles, such as the tags `people', `man', and `adult' in Figure FIGREF36 .
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest F1-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an F1-score of 0.7755 (improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class with a precision of 0.8961 (improvement of 6.26% with respect to (T)) and a recall of 0.6994 (improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We chose these areas, which have a known gang presence on social media, to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated each profile with its 3,200 most recent tweets (the maximum that can be collected from Twitter's API). Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features that overlap with those of gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that, in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from the textual descriptions, emojis, images, and videos shared on Twitter (with images and videos represented by textual features derived from them). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising F1-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles found by searching for additional location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
Unanswerable
d491ee69db39ec65f0f6da9ec03450520389699a | d491ee69db39ec65f0f6da9ec03450520389699a_0 | Q: What are the differences in the use of emojis between gang member and the rest of the Twitter population?
Text: Introduction and Motivation
The crime and violence that street gangs introduce into neighborhoods are a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang-occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily lives BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be overstated. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising F1-score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and explains how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection, and our approach to identify gang member profiles. Section SECREF4 explains the evaluation of the proposed method and discusses the results in detail. Section SECREF5 concludes the paper and discusses planned future work.
Related Work
Gang violence is a well-studied social science topic dating back to 1927 BIBREF17 . However, the notion of “Cyber-” or “Internet banging”, defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples of how hip-hop music shared on social media websites and targeted at harassing rival gang members often escalated into real-world clashes between those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We manually verified each discovered profile by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns displayed in a threatening way, stacks of money, gang hand signs and gestures, or people holding or posing with a gun appeared likely to belong to a gang member. Such images were often identified in profiles of users who submitted tweets that contained messages of support or sadness for prisoners or recently fallen gang members, or who used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in steps 1, 2, 4, and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media, which curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that is not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location-neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but do not belong to gang members, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
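The location filter applied to this random sample can be sketched as follows. The reverse_geocode helper is a hypothetical stand-in for the Google Maps geocoding call used in the study; only its assumed return value (a country code or None) matters for the filtering logic.

```python
# Sketch of the U.S.-only location filter. `reverse_geocode` is a hypothetical
# stand-in for the Google Maps geocoding call; only its return value matters.
def reverse_geocode(location_text):
    """Return an ISO country code for a free-text location, or None."""
    lookup = {"chicago, il": "US", "los angeles": "US", "toronto": "CA"}
    return lookup.get(location_text.strip().lower())

def keep_us_profiles(profiles):
    """Drop profiles whose stated location is unresolved or outside the U.S."""
    kept = []
    for profile in profiles:
        location = profile.get("location", "")
        if location and reverse_geocode(location) == "US":
            kept.append(profile)
    return kept

sample = [
    {"screen_name": "a", "location": "Chicago, IL"},
    {"screen_name": "b", "location": "Toronto"},
    {"screen_name": "c", "location": ""},
]
print([p["screen_name"] for p in keep_us_profiles(sample)])  # ['a']
```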
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles and all stop words, and we performed stemming across all tweets and profile descriptions.
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, curse words represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated fellow gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
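The top-10 comparison described here reduces to two frequency tables over profile-description unigrams. A sketch of that tally over preprocessed token lists is shown below; the token data is illustrative.

```python
# Sketch of the top-k term comparison over profile-description unigrams;
# the token lists are illustrative.
from collections import Counter

def top_terms(descriptions, k=10):
    """Most common unigrams across a set of tokenized profile descriptions."""
    counts = Counter()
    for tokens in descriptions:
        counts.update(tokens)
    return [term for term, _ in counts.most_common(k)]

gang_descs = [["free", "rip", "gang"], ["gang", "free"], ["rip", "fallen"]]
non_gang_descs = [["love", "music"], ["life", "love", "book"], ["music", "live"]]
print("gang:", top_terms(gang_descs, k=3))          # ['free', 'rip', 'gang']
print("non-gang:", top_terms(non_gang_descs, k=3))  # ['love', 'music', 'life']
```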
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied whether and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The most frequently used emoji among gang members was the fuel pump emoji, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset; it is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. Variants of the angry face emoji, such as the devil face emoji and the imp emoji, were also common in gang member tweets. The frequency of each emoji symbol across a user's set of tweets is thus considered as a feature for our classifier.
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as vectors of term frequencies, where the terms were collected from one or more of the feature sets described above.
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the predictive power of each feature type on its own. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the Twitter profiles that had at least one emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models, described below.
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
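Model(1)'s handling of missing feature types amounts to concatenating per-type feature blocks and zero-filling any block the profile lacks. A minimal sketch of that assembly, with made-up block sizes and profiles, is shown below.

```python
# Sketch of Model(1)'s zero-filling: concatenate per-type feature blocks and
# fill any missing block with zeros. Block sizes here are made up.
import numpy as np

BLOCK_SIZES = {"tweets": 5, "emoji": 3, "image_tags": 2}

def assemble_feature_vector(profile_features):
    """Concatenate per-type blocks, zero-filling any type the profile lacks."""
    blocks = []
    for feature_type, size in BLOCK_SIZES.items():
        block = profile_features.get(feature_type)
        blocks.append(np.asarray(block, dtype=float) if block is not None
                      else np.zeros(size))
    return np.concatenate(blocks)

partial_profile = {"tweets": [2, 0, 1, 0, 3], "emoji": [1, 0, 4]}  # no image tags
print(assemble_feature_vector(partial_profile))
# [2. 0. 1. 0. 3. 1. 0. 4. 0. 0.]
```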
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely, Precision $= \frac{tp}{tp + fp}$, Recall $= \frac{tp}{tp + fn}$, and F1-score $= \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$, where $tp$ is the number of true positives, $fp$ is the number of false positives, $tn$ is the number of true negatives, and $fn$ is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
Experimental results
Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Since it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting that a Twitter user is a non-gang member is much easier than predicting that a Twitter user is a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs, and guns, which are often shared by gang members. This led to a small set of fairly generic image tags in their profiles, such as the tags `people', `man', and `adult' in Figure FIGREF36 .
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest $F_1$-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an $F_1$-score of 0.7755 (improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class with a precision of 0.8961 (improvement of 6.26% with respect to (T)) and a recall of 0.6994 (improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We chose these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using its 3,200 most recent tweets (the maximum that can be collected from Twitter's API). Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
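A sketch of how such an area-restricted collection could be filtered is given below, assuming a hypothetical stream_statuses() generator that yields (user_id, longitude, latitude) tuples for geotagged statuses from the streaming API; the bounding boxes are rough illustrations, not the coordinates used in this experiment.

```python
# Approximate (west, south, east, north) bounding boxes -- illustrative only.
BOXES = {
    "los_angeles": (-118.67, 33.70, -118.15, 34.34),
    "chicago_south_side": (-87.74, 41.64, -87.52, 41.80),
}

def in_box(lon, lat, box):
    west, south, east, north = box
    return west <= lon <= east and south <= lat <= north

def collect_profiles(stream_statuses, limit=25000):
    """Accumulate the ids of users tweeting from the target areas; each kept
    profile is later populated with its 3,200 most recent tweets."""
    seen = set()
    for user_id, lon, lat in stream_statuses():
        if any(in_box(lon, lat, box) for box in BOXES.values()):
            seen.add(user_id)
        if len(seen) >= limit:
            break
    return seen
```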
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features that overlap with those of gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from the textual descriptions, emojis, images, and videos shared on Twitter (with textual features extracted from the images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising $F_1$-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles found through additional location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and the social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
| 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them, gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior |
d3839c7acee4f9c8db0a4a475214a8dcbd0bc26f | d3839c7acee4f9c8db0a4a475214a8dcbd0bc26f_0 | Q: What are the differences in the use of YouTube links between gang member and the rest of the Twitter population?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising $F_1$-score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well-studied social science topic dating back to 1927 BIBREF17 . However, the notion of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples of how hip-hop music shared on social media websites to harass rival gang members often ended up in real-world clashes among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in steps 1, 2, 4 and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media that curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that is not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
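A sketch of the candidate expansion performed in steps 4 and 5 above is shown below, assuming hypothetical helpers fetch_retweet_authors(), fetch_followers(), and fetch_followees() that wrap the Twitter REST API; every candidate produced this way still has to pass the manual verification of step 3.

```python
def expand_candidates(seed_profiles, fetch_retweet_authors,
                      fetch_followers, fetch_followees, per_seed_cap=10):
    """Collect retweet authors, followers, and followees of the verified seeds.
    The per-seed cap keeps any single gang or location from dominating the set."""
    candidates = set()
    for seed in seed_profiles:
        related = (fetch_retweet_authors(seed)
                   + fetch_followers(seed)
                   + fetch_followees(seed))
        candidates.update(related[:per_seed_cap])
    return candidates - set(seed_profiles)
```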
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but are not gang members, captures local languages used by family/friends of gang members and ordinary people in a neighborhood where gangs operate.
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200), along with profile descriptions and images (profile and cover photos), for every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles and all stop words, and performed stemming across all tweets and profile descriptions.
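A minimal sketch of this preprocessing step, assuming NLTK for stop words and stemming; the tokenizer and the seed-term list shown are illustrative, and nltk.download('stopwords') is required once.

```python
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

SEED_TERMS = {"freedaguys", "freetheguys", "freemynigga", "ripdaguys", "fuckdaopps"}
STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def preprocess(text):
    """Lowercase, drop seed terms and stop words, then stem the remaining tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    kept = [t for t in tokens if t not in STOP_WORDS and t not in SEED_TERMS]
    return [STEMMER.stem(t) for t in kept]
```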
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, curse words represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
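A sketch of turning shared YouTube links into music-interest features is given below, assuming a hypothetical fetch_video_text(video_id) wrapper around the YouTube API that returns the concatenated description and comments; the regular expression covers only the common youtube.com/watch?v= and youtu.be/ URL forms.

```python
import re
from collections import Counter

YOUTUBE_RE = re.compile(r"(?:youtube\.com/watch\?v=|youtu\.be/)([\w-]{11})")

def youtube_ids(tweets):
    """Unique video ids found in a user's tweets."""
    return {vid for tweet in tweets for vid in YOUTUBE_RE.findall(tweet)}

def music_features(tweets, fetch_video_text, preprocess):
    """Bag of (stemmed) words over the descriptions and comments of all videos
    shared by a user; preprocess() is the same routine used for tweet text."""
    bag = Counter()
    for vid in youtube_ids(tweets):
        bag.update(preprocess(fetch_video_text(vid)))
    return bag
```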
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier.
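A sketch of the emoji features follows, tracking only a few of the symbols named above; the specific code points tracked and the small character window used to detect `chaining' are our assumptions.

```python
from collections import Counter

POLICE = "\U0001F46E"      # police officer
PISTOL = "\U0001F52B"      # pistol
HUNDRED = "\U0001F4AF"     # hundred points
FUEL_PUMP = "\u26FD"       # fuel pump
TRACKED = {POLICE, PISTOL, HUNDRED, FUEL_PUMP}

def emoji_counts(tweets, tracked=TRACKED):
    """Frequency of each tracked emoji across a user's tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(ch for ch in tweet if ch in tracked)
    return counts

def has_chain(tweets, first, second, max_gap=3):
    """True if `first` is followed by `second` within a few characters,
    e.g. the police emoji chained with the pistol emoji."""
    for tweet in tweets:
        for i, ch in enumerate(tweet):
            if ch == first and second in tweet[i + 1:i + 1 + max_gap]:
                return True
    return False
```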
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
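A sketch of converting the returned tags into features, assuming a hypothetical tag_image(url) wrapper around the Clarifai API that returns a ranked list of tag strings:

```python
from collections import Counter

def image_tag_features(profile_image_url, cover_image_url, tag_image, top_k=20):
    """Bag of image tags over a profile's avatar and cover photo,
    keeping the top_k highest-scored tags per image."""
    tags = Counter()
    for url in (profile_image_url, cover_image_url):
        if url:
            tags.update(tag_image(url)[:top_k])
    return tags
```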
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as vectors of term frequencies, where the terms were collected from one or more of the feature sets described above.
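A sketch of how the four models and the 10-fold evaluation described next could be wired together is given below, written against a current scikit-learn (the paper used version 0.17.1, where the cross-validation utilities lived under sklearn.cross_validation); the particular estimator classes and hyperparameters are illustrative choices, not the authors' configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

MODELS = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": LinearSVC(),
}

def evaluate_models(X, y, n_splits=10):
    """10-fold cross validation; returns per-class precision/recall/F1 per model."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    results = {}
    for name, model in MODELS.items():
        y_pred = cross_val_predict(model, X, y, cv=cv)
        results[name] = precision_recall_fscore_support(y, y_pred, labels=[1, 0])
    return results
```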
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the quality of their predictive power by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:
Because a Twitter profile may not contain every feature type, Model(1) represents the practical scenario in which some feature types are missing from a profile. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of the scikit-learn machine learning library to implement the classifiers.
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely Precision $= tp/(tp+fp)$, Recall $= tp/(tp+fn)$, and $F_1$-score $= 2 \cdot \mathrm{Precision} \cdot \mathrm{Recall} / (\mathrm{Precision} + \mathrm{Recall})$, where $tp$ is the number of true positives, $fp$ is the number of false positives, $tn$ is the number of true negatives, and $fn$ is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
Experimental results
Table TABREF37 presents the average precision, recall, and $F_1$-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Because it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and $F_1$-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable $F_1$-score of 0.7229 for the `gang' class. It also features the highest $F_1$-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an $F_1$-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an $F_1$-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor $F_1$-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third party Web service which is not specifically designed to identify gang hand signs, drugs and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, e.g., the image tags in Figure FIGREF36 such as `people', `man', and `adult'.
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest $F_1$-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an $F_1$-score of 0.7755 (improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class with a precision of 0.8961 (improvement of 6.26% with respect to (T)) and a recall of 0.6994 (improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We chose these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using its 3,200 most recent tweets (the maximum that can be collected from Twitter's API). Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features that overlap with those of gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from the textual descriptions, emojis, images, and videos shared on Twitter (with textual features extracted from the images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising $F_1$-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles found through additional location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and the social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
| 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre |
a6d00f44ff8f83b6c1787e39333e759b0c3daf15 | a6d00f44ff8f83b6c1787e39333e759b0c3daf15_0 | Q: What are the differences in the use of images between gang member and the rest of the Twitter population?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising $F_1$-score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well-studied social science topic dating back to 1927 BIBREF17 . However, the notion of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples of how hip-hop music shared on social media websites to harass rival gang members often ended up in real-world clashes among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from the Twitter profiles collected in steps 1, 2, 4 and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media, which curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that is not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
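The keyword search in Step 1 and the network expansion in Steps 4 and 5 can be scripted against the Twitter REST API. The sketch below is a minimal illustration assuming the tweepy 3.x client and valid credentials; the seed term list mirrors the hashtags above, but the helper names (`collect_candidates`, `expand_via_network`) and the per-user limits are our own illustrative choices, and every candidate returned would still require the manual verification described in Step 3.

```python
import tweepy

# Hypothetical credentials; the study used the Twitter REST API and Followerwonk.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Location-neutral seed terms, including informal spelling variants (Step 1).
SEED_TERMS = ["#FreeDaGuys", "#FreeTheGuys", "#FreeMyNigga",
              "#RIPDaGuys", "#FuckDaOpps"]

def collect_candidates(seed_terms):
    """Return candidate profiles whose descriptions mention a seed term."""
    candidates = {}
    for term in seed_terms:
        # users/search matches the query against profile names and descriptions.
        for user in api.search_users(term):
            if term.lower().lstrip("#") in (user.description or "").lower():
                candidates[user.id] = user
    return candidates

def expand_via_network(api, seed_user_ids, limit=50):
    """Steps 4 and 5: authors of retweets, followers, and followees of seed users."""
    expanded = set()
    for uid in seed_user_ids:
        for status in api.user_timeline(user_id=uid, count=200):
            if hasattr(status, "retweeted_status"):      # author of a retweeted post
                expanded.add(status.retweeted_status.user.id)
        expanded.update(api.followers_ids(user_id=uid)[:limit])
        expanded.update(api.friends_ids(user_id=uid)[:limit])
    return expanded
```

The cap on followers and followees per seed user reflects the deliberate limit described in Step 5, which keeps the dataset from skewing toward a single gang or city.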
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The selected profiles were then filtered by location to remove non-U.S. profiles, by geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location-neutral keywords discussed in Section SECREF3 . Introducing these profiles, which share some characteristics of gang member profiles (such as frequent cursing or cursing at law enforcement) but do not belong to gang members, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
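The U.S. location filter amounts to one Google Maps Geocoding API call per profile on the free-text location field. The snippet below is a sketch assuming a valid API key; the `is_us_location` helper name is ours, and the original pipeline may have used a different client.

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def is_us_location(location_text, api_key):
    """Return True if the free-text profile location resolves to a U.S. address."""
    if not location_text or not location_text.strip():
        return False                      # unspecified locations are discarded
    resp = requests.get(GEOCODE_URL,
                        params={"address": location_text, "key": api_key}).json()
    if resp.get("status") != "OK":
        return False
    for component in resp["results"][0]["address_components"]:
        if "country" in component["types"]:
            return component["short_name"] == "US"
    return False
```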
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles, all stop words, and performed stemming across all tweets and profile descriptions.
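Twitter's user timeline endpoint returns at most 200 tweets per call and roughly 3,200 tweets in total per account, so collecting each profile's history is a matter of paging. A minimal sketch with tweepy's cursor (again assuming the 3.x client and credentials from the earlier sketch):

```python
import tweepy

def fetch_profile_data(api, screen_name, max_tweets=3200):
    """Collect a user's description, image URLs, and up to ~3,200 recent tweets."""
    user = api.get_user(screen_name=screen_name)
    tweets = [status.full_text if hasattr(status, "full_text") else status.text
              for status in tweepy.Cursor(api.user_timeline,
                                          screen_name=screen_name,
                                          count=200,
                                          tweet_mode="extended").items(max_tweets)]
    return {
        "description": user.description,
        "profile_image": user.profile_image_url_https,
        "banner_image": getattr(user, "profile_banner_url", None),
        "tweets": tweets,
    }
```

Seed word removal, stop word removal, and stemming would then be applied to the returned text before any of the analyses below.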
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. The comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
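Turning the profile descriptions (and, analogously, the tweet text) into unigram count features is straightforward with scikit-learn. The sketch below uses the current API and toy placeholder strings; stop word removal and stemming are assumed to have been applied beforehand, as described above.

```python
from sklearn.feature_extraction.text import CountVectorizer

# One already-stemmed, stop-word-filtered description string per profile (placeholders).
descriptions = ["free rip gang artist", "love life music book travel"]

vectorizer = CountVectorizer(lowercase=True)            # unigram term frequencies
X_profile = vectorizer.fit_transform(descriptions)       # sparse (n_profiles, n_terms)
print(X_profile.shape, len(vectorizer.vocabulary_))
```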
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
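Retrieving the description and comments for each linked video uses two endpoints of the YouTube Data API v3. A sketch with the google-api-python-client library follows; the API key is assumed, and paging, error handling, and videos with disabled comments are omitted for brevity.

```python
from googleapiclient.discovery import build

def youtube_text_for_video(video_id, api_key):
    """Return the description and top-level comments of a YouTube video."""
    youtube = build("youtube", "v3", developerKey=api_key)

    video = youtube.videos().list(part="snippet", id=video_id).execute()
    description = video["items"][0]["snippet"]["description"] if video["items"] else ""

    comments = []
    response = youtube.commentThreads().list(
        part="snippet", videoId=video_id, maxResults=100, textFormat="plainText"
    ).execute()
    for item in response.get("items", []):
        comments.append(item["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
    return description, comments
```

The returned description and comment strings are pooled into the per-user bag of words that represents music interests.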
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the emoji most frequently used by gang members; it is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset; it is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of angry face emojis such as the devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol across a user's tweets is thus considered as a feature for our classifier.
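Emoji frequencies and the `chaining' of specific pairs (such as the police officer emoji followed by the pistol emoji) can be computed directly over the raw tweet text. The sketch below hard-codes a handful of code points for illustration only; the actual feature set covers every emoji observed in the corpus, and skin-tone modifiers and ZWJ sequences are ignored here.

```python
from collections import Counter

# A few of the emoji discussed above, by Unicode code point.
POLICE = "\U0001F46E"      # police officer
PISTOL = "\U0001F52B"      # pistol
FUEL_PUMP = "\u26FD"       # fuel pump
HUNDRED = "\U0001F4AF"     # hundred points

EMOJI_OF_INTEREST = {POLICE, PISTOL, FUEL_PUMP, HUNDRED}

def emoji_features(tweets):
    """Per-user emoji counts plus a flag for the police -> pistol chain."""
    counts = Counter()
    police_pistol_chain = False
    for tweet in tweets:
        chars = list(tweet)
        counts.update(c for c in chars if c in EMOJI_OF_INTEREST)
        # A `chain' here is simply the two emoji appearing adjacently.
        for a, b in zip(chars, chars[1:]):
            if a == POLICE and b == PISTOL:
                police_pistol_chain = True
    features = {f"emoji_{ord(e):x}": counts[e] for e in EMOJI_OF_INTEREST}
    features["police_pistol_chain"] = int(police_pistol_chain)
    return features
```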
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
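Image tags can be pulled from Clarifai's general-purpose model. The sketch below assumes the Clarifai 2.x Python client and an API key, both of which are assumptions on our part; tag names and confidence scores come back as `concepts', and only the 20 highest-scoring tags are kept, mirroring the procedure above.

```python
from clarifai.rest import ClarifaiApp   # Clarifai 2.x client (an assumption)

def image_tags(image_url, api_key, top_k=20):
    """Return up to top_k (tag, score) pairs for a profile or cover image."""
    app = ClarifaiApp(api_key=api_key)
    response = app.public_models.general_model.predict_by_url(image_url)
    concepts = response["outputs"][0]["data"].get("concepts", [])
    return [(c["name"], c["value"]) for c in concepts[:top_k]]
```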
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as vectors of term frequencies, where the terms were collected from one or more of the feature sets described above.
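A sketch of how the term-frequency vectors feed the four models in scikit-learn follows. The paper used version 0.17.1; the class names below are unchanged in current releases, the hyperparameters shown are illustrative rather than the ones used in the study, and `X` is assumed to be the concatenation of whichever feature sets are being tested.

```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(kernel="linear"),
}

# X: sparse term-frequency matrix (profiles x terms); y: 1 = gang, 0 = non-gang.
def fit_all(X, y):
    return {name: model.fit(X, y) for name, model in models.items()}
```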
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the predictive power of each feature type by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the Twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely, Precision = TP / (TP + FP), Recall = TP / (TP + FN), and F1-score = 2 × Precision × Recall / (Precision + Recall), where TP is the number of true positives, FP is the number of false positives, TN is the number of true negatives, and FN is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
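The per-class metrics under 10-fold cross validation can be reproduced with scikit-learn utilities. The sketch below uses the modern `model_selection` module (in the 0.17.x series used for the paper, the equivalent helpers lived in `sklearn.cross_validation`) and pools predictions across folds, which approximates the per-fold averages reported in Table TABREF37.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

def evaluate(model, X, y, n_splits=10):
    """Report precision/recall/F1 separately for the gang (1) and non-gang (0) classes."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    y_pred = cross_val_predict(model, X, y, cv=cv)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y, y_pred, labels=[1, 0])
    for label, p, r, f in zip(["gang", "non-gang"], precision, recall, f1):
        print(f"{label:>8}: P={p:.4f} R={r:.4f} F1={f:.4f}")
```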
Experimental results
Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Since it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, e.g., image tags in Figure FIGREF36 such as `people', `man', and `adult'.
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest F1-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an F1-score of 0.7755 (an improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class, with a precision of 0.8961 (an improvement of 6.26% with respect to (T)) and a recall of 0.6994 (an improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using the 3,200 most recent tweets (the maximum that can be collected from Twitter's API) for each profile. Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
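Capturing the real-time test profiles comes down to a location-filtered stream. A sketch with tweepy 3.x follows; the bounding boxes for Los Angeles and the South Side of Chicago are rough illustrative values, not the exact ones used in the study, and `api` refers to the authenticated client from the earlier sketches.

```python
import tweepy

# Approximate bounding boxes: [west_lon, south_lat, east_lon, north_lat]
LOS_ANGELES = [-118.67, 33.70, -118.16, 34.34]
CHICAGO_SOUTH_SIDE = [-87.74, 41.64, -87.52, 41.87]

class ProfileCollector(tweepy.StreamListener):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.profiles = {}

    def on_status(self, status):
        # Keep one record per author; full timelines are fetched later via the REST API.
        self.profiles[status.user.id] = status.user.screen_name

    def on_error(self, status_code):
        return status_code != 420        # stop on rate-limit disconnects

listener = ProfileCollector()
stream = tweepy.Stream(auth=api.auth, listener=listener)
stream.filter(locations=LOS_ANGELES + CHICAGO_SOUTH_SIDE)
```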
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features that overlap with those of known gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as `gang' members, to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from textual descriptions, emojis, images and videos shared on Twitter (with textual features extracted from the images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising F1-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles found by searching for additional location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names, to understand whether using “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and the social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
px | user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash |
0d4aa05eb00d9dee74000ea5b21b08f693ba1e62 | 0d4aa05eb00d9dee74000ea5b21b08f693ba1e62_0 | Q: What are the differences in language use between gang member and the rest of the Twitter population?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising INLINEFORM0 -score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well-studied social science topic dating back to 1927 BIBREF17 . However, the notion of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples of how hip-hop music shared on social media websites and targeted at harassing rival gang members often ended up in real-world clashes among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from the Twitter profiles collected in steps 1, 2, 4 and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media, which curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that is not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The selected profiles were then filtered by location to remove non-U.S. profiles, by geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location-neutral keywords discussed in Section SECREF3 . Introducing these profiles, which share some characteristics of gang member profiles (such as frequent cursing or cursing at law enforcement) but do not belong to gang members, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles, all stop words, and performed stemming across all tweets and profile descriptions.
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. The comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the emoji most frequently used by gang members; it is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset; it is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of angry face emojis such as the devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol across a user's tweets is thus considered as a feature for our classifier.
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as vectors of term frequencies, where the terms were collected from one or more of the feature sets described above.
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the predictive power of each feature type by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the Twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely, Precision = TP / (TP + FP), Recall = TP / (TP + FN), and F1-score = 2 × Precision × Recall / (Precision + Recall), where TP is the number of true positives, FP is the number of false positives, TN is the number of true negatives, and FN is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
Experimental results
Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Since it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, e.g., image tags in Figure FIGREF36 such as `people', `man', and `adult'.
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest F1-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an F1-score of 0.7755 (an improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class, with a precision of 0.8961 (an improvement of 6.26% with respect to (T)) and a recall of 0.6994 (an improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using the 3,200 most recent tweets (the maximum that can be collected from Twitter's API) for each profile. Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features that overlap with those of known gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as `gang' members, to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from textual descriptions, emojis, images and videos shared on Twitter (with textual features extracted from the images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising F1-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles, found by searching for more location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
px | Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us |
382bef47d316d7c12ea190ae160bf0912a0f4ffe | 382bef47d316d7c12ea190ae160bf0912a0f4ffe_0 | Q: How is gang membership verified?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising F1-score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well studied social science topic dating back to 1927 BIBREF17 . However, the notions of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples on how hip-hop music shared on social media websites targeted at harassing rival gang members often ended up in real-world collisions among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys. A minimal sketch of this keyword-variant matching over candidate profile descriptions is shown after this list.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
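As a rough, minimal sketch of the keyword-variant matching in Step 1, the Python snippet below checks candidate profile descriptions against the location-neutral seed terms and a few informal spelling variants. The extra variant spellings and the example profiles are illustrative assumptions; the actual search was performed through the Followerwonk and Twitter REST APIs as described above.

SEED_VARIANTS = {
    "#freedaguys": ["#freedaguys", "#freetheguys"],
    "#freemynigga": ["#freemynigga"],
    "#ripdaguys": ["#ripdaguys", "#riptheguys"],
    "#fuckdaopps": ["#fuckdaopps", "#fucktheopps"],
}  # variants beyond the first entry of each list are assumptions, not the study's exact lists

def matches_seed_terms(description):
    """Return True if a profile description contains any seed-term spelling variant."""
    text = (description or "").lower()
    return any(variant in text for variants in SEED_VARIANTS.values() for variant in variants)

# Hypothetical candidate profiles returned by a keyword search of profile descriptions.
candidate_profiles = [
    {"screen_name": "user_a", "description": "#FreeDaGuys stay up"},
    {"screen_name": "user_b", "description": "coffee lover and runner"},
]
flagged = [p for p in candidate_profiles if matches_seed_terms(p["description"])]
print([p["screen_name"] for p in flagged])  # -> ['user_a']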
Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in steps 1, 2, 4, and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media that curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that are not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but are not gang members themselves, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
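The location filter can be sketched as below, assuming the official googlemaps Python client, a valid API key, and the standard geocoding response format; none of these specifics come from the paper itself.

import googlemaps

gmaps = googlemaps.Client(key="YOUR_GOOGLE_MAPS_API_KEY")  # placeholder key

def is_us_location(location_text):
    """Geocode the free-text location from a profile and keep only U.S. matches."""
    if not location_text or not location_text.strip():
        return False  # unspecified locations are discarded
    try:
        results = gmaps.geocode(location_text)
    except Exception:
        return False
    for result in results:
        for component in result.get("address_components", []):
            if "country" in component.get("types", []):
                return component.get("short_name") == "US"
    return False

profiles = [{"screen_name": "u1", "location": "Los Angeles, CA"},
            {"screen_name": "u2", "location": "somewhere over the rainbow"}]
us_profiles = [p for p in profiles if is_us_location(p.get("location", ""))]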
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles, all stop words, and performed stemming across all tweets and profile descriptions.
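A minimal sketch of this clean-up step with NLTK is shown below; the abbreviated seed-word list and the crude tokenizer are assumptions made for illustration.

import re
from nltk.corpus import stopwords      # requires nltk.download('stopwords') once
from nltk.stem import PorterStemmer

SEED_WORDS = {"freedaguys", "freetheguys", "freemynigga", "ripdaguys", "fuckdaopps"}
STOP_WORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    """Lowercase, strip links, drop seed and stop words, then stem the remaining tokens."""
    text = re.sub(r"http\S+", " ", text.lower())   # remove URLs
    tokens = re.findall(r"[a-z']+", text)          # crude word tokenizer (assumption)
    tokens = [t for t in tokens if t not in SEED_WORDS and t not in STOP_WORDS]
    return [stemmer.stem(t) for t in tokens]

print(preprocess("Making money on the block all day http://t.co/xyz"))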
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free, which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggests that gang members like to self-identify themselves on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
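The top-word comparison can be reproduced with a simple counter, as in the sketch below; gang_descriptions and nongang_descriptions stand for the preprocessed token lists of the two groups and are shown here with toy values.

from collections import Counter

def top_terms(token_lists, k=10):
    """Count unigrams across a group of profile descriptions and return the k most common."""
    counts = Counter()
    for tokens in token_lists:
        counts.update(tokens)
    return counts.most_common(k)

# Toy stand-ins for the real preprocessed profile descriptions.
gang_descriptions = [["free", "rip", "gang"], ["gang", "free", "real"]]
nongang_descriptions = [["love", "life", "music"], ["book", "love", "live"]]

print("gang:", top_terms(gang_descriptions))
print("non-gang:", top_terms(nongang_descriptions))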
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
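A sketch of how the description and comments of each linked video could be pulled is given below, assuming the YouTube Data API v3 via the google-api-python-client package and a valid developer key; the simplified URL pattern and the error handling are assumptions.

import re
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_YOUTUBE_API_KEY")  # placeholder key
VIDEO_ID_RE = re.compile(r"(?:youtube\.com/watch\?v=|youtu\.be/)([\w-]{11})")

def video_text(tweet_text):
    """Concatenate the description and top-level comments of YouTube videos linked in a tweet."""
    pieces = []
    for video_id in VIDEO_ID_RE.findall(tweet_text):
        videos = youtube.videos().list(part="snippet", id=video_id).execute()
        for item in videos.get("items", []):
            pieces.append(item["snippet"].get("description", ""))
        try:
            comments = youtube.commentThreads().list(
                part="snippet", videoId=video_id, maxResults=100, textFormat="plainText"
            ).execute()
        except Exception:
            continue  # comments may be disabled for a video
        for thread in comments.get("items", []):
            pieces.append(thread["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
    return " ".join(pieces)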
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier.
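The emoji counts and the 'chain' checks can be sketched as follows; the emoji set is restricted to the symbols discussed above, and treating adjacent emoji characters as a chain is an assumption about how chaining could be operationalised.

from collections import Counter

EMOJI = {
    "\U0001F46E": "police_officer",
    "\U0001F52B": "pistol",
    "\u26FD": "fuel_pump",
    "\U0001F4AF": "hundred_points",
    "\U0001F525": "fire",
    "\U0001F602": "tears_of_joy",
}

def emoji_features(tweets):
    """Frequencies of tracked emojis plus indicators for two chained combinations."""
    counts = Counter()
    features = {"police_then_pistol": 0, "hundred_points_with_pistol": 0}
    for text in tweets:
        names = [EMOJI[ch] for ch in text if ch in EMOJI]
        counts.update(names)
        for a, b in zip(names, names[1:]):        # adjacent emojis form a 'chain'
            if a == "police_officer" and b == "pistol":
                features["police_then_pistol"] = 1
        if "hundred_points" in names and "pistol" in names:
            features["hundred_points_with_pistol"] = 1
    features.update(counts)
    return features

print(emoji_features(["on the block \U0001F46E\U0001F52B", "\U0001F4AF\U0001F525"]))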
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
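The image-tag features can be assembled as in the sketch below. The fetch_image_tags helper is a hypothetical stand-in for a call to an image-tagging service such as the one used in this work; its name, signature, and canned return value are assumptions, not the actual Clarifai client API.

from collections import Counter

def fetch_image_tags(image_url, max_tags=20):
    """Hypothetical wrapper around an image-tagging web service.
    Returns canned tags so the sketch runs end to end; swap in the real client for actual use."""
    return ["people", "adult", "portrait"][:max_tags]

def image_tag_features(profile):
    """Bag-of-tags features built from a profile's avatar and cover photo."""
    tags = Counter()
    for url in (profile.get("profile_image_url"), profile.get("cover_image_url")):
        if url:
            tags.update(fetch_image_tags(url))
    return tags

print(image_tag_features({"profile_image_url": "http://example.com/avatar.jpg"}))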
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as a vector of term frequencies where the terms were collected from one or more feature sets described above.
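A compact scikit-learn sketch of the four models over term-frequency vectors follows; the toy documents, labels, and hyperparameters are assumptions, and LinearSVC stands in for the SVM.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

# One pseudo-document per profile (concatenated text-like features) and a binary label (1 = gang).
documents = ["free da guys rip gang shit", "love life music book", "money smoke hit real need", "new like love know want"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(documents)   # term-frequency vectors

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": LinearSVC(),
}
for name, model in models.items():
    model.fit(X, labels)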
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the quality of their predictive power by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
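One way the two combined models could be realised is sketched below: each feature type owns a fixed block of columns, Model(1) zero-fills the blocks a profile lacks, and Model(2) keeps only profiles that have every block. The block layout and dimensions are assumptions for illustration.

import numpy as np

FEATURE_TYPES = ["tweet", "emoji", "profile", "image", "music"]

def combine(profile_features, dims):
    """Concatenate per-type feature vectors, zero-filling any type the profile lacks (Model(1))."""
    blocks = []
    for ftype in FEATURE_TYPES:
        vec = profile_features.get(ftype)
        blocks.append(vec if vec is not None else np.zeros(dims[ftype]))
    return np.concatenate(blocks)

dims = {"tweet": 4, "emoji": 3, "profile": 2, "image": 2, "music": 3}
profiles = [
    {"tweet": np.ones(4), "emoji": np.ones(3)},          # profile missing three feature types
    {t: np.ones(d) for t, d in dims.items()},            # profile with every feature type
]

X_model1 = np.vstack([combine(p, dims) for p in profiles])              # all profiles, zero-filled
complete = [p for p in profiles if all(t in p for t in FEATURE_TYPES)]
X_model2 = np.vstack([combine(p, dims) for p in complete])              # Model(2): complete profiles only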
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely Precision = tp / (tp + fp), Recall = tp / (tp + fn), and F1-score = 2 * Precision * Recall / (Precision + Recall), where tp is the number of true positives, fp is the number of false positives, tn is the number of true negatives, and fn is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
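The sketch below computes the per-class metrics under 10-fold cross validation with scikit-learn; the synthetic data, the stratified fold strategy, and the classifier settings are assumptions standing in for the real feature matrix and labels.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

# Synthetic stand-in for the combined feature matrix and labels (1 = gang).
X, y = make_classification(n_samples=300, n_features=50, weights=[0.85, 0.15], random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=cv)

precision, recall, f1, _ = precision_recall_fscore_support(y, pred, labels=[1, 0])
for cls, p, r, f in zip(["gang", "non-gang"], precision, recall, f1):
    print(f"{cls}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")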
Experimental results
Table TABREF37 presents the average precision, recall, and F1-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Because it is reasonable to expect that any given Twitter profile is not that of a gang member, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus consider a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and F1-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable F1-score of 0.7229 for the `gang' class. It also features the highest F1-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22 . We also noted that music features offer promising results, with an F1-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an F1-score of 0.6067 also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor F1-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17 . First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third-party Web service which is not specifically designed to identify gang hand signs, drugs, and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, i.e., the image tags in Figure FIGREF36 such as `people', `man', and `adult'.
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest F1-scores for both `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting, we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an F1-score of 0.7755 (improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class with a precision of 0.8961 (improvement of 6.26% with respect to (T)) and a recall of 0.6994 (improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using the 3,200 most recent tweets (the maximum that can be collected from Twitter's API) for each profile. Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features which overlap with those of gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as “my homie” to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41 . The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck, etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content were not indicative of a gang affiliation. In conclusion, we find that in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.
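The qualitative inspection described above can be scripted roughly as below; vectorizer and model stand for fitted objects from the training sketches, and the variable names are assumptions.

from collections import Counter

def inspect_predicted_gang(profiles, texts, vectorizer, model, k=10):
    """Label unseen profiles and report the most frequent terms among those predicted as 'gang'."""
    X_new = vectorizer.transform(texts)
    predictions = model.predict(X_new)
    flagged, term_counts = [], Counter()
    for profile, text, label in zip(profiles, texts, predictions):
        if label == 1:                      # predicted 'gang'
            flagged.append(profile)
            term_counts.update(text.split())
    return flagged, term_counts.most_common(k)

# Usage (assuming the earlier CountVectorizer and one of the fitted models):
# flagged, top_terms = inspect_predicted_gang(unseen_profiles, unseen_texts, vectorizer, models["random_forest"])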
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from textual descriptions, emojis, images and videos shared on Twitter (with textual features extracted from the images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising F1-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles, found by searching for more location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether “having a gang name in the profile description” as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
px | Manual verification |
32a232310babb92991c4b1b75f7aa6b4670ec447 | 32a232310babb92991c4b1b75f7aa6b4670ec447_0 | Q: Do the authors provide evidence that 'most' street gang members use Twitter to intimidate others?
Text: Introduction and Motivation
The crime and violence street gangs introduce into neighborhoods is a growing epidemic in cities around the world. Today, over 1.23 million people in the United States are members of a street gang BIBREF0 , BIBREF1 , which is a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise BIBREF2 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Moreover, data from the Centers for Disease Control in the United States suggests that the victims of at least 1.3% of all gang-related homicides are merely innocent bystanders who live in gang occupied neighborhoods BIBREF3 .
Street gang members have established online presences coinciding with their physical occupation of neighborhoods. The National Gang Threat Assessment Report confirms that at least tens of thousands of gang members are using social networking websites such as Twitter and video sharing websites such as YouTube in their daily life BIBREF0 . They are very active online; the 2007 National Assessment Center's survey of gang members found that 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF4 . Gang members typically use social networking sites and social media to develop online respect for their street gang BIBREF5 and to post intimidating, threatening images or videos BIBREF6 . This “Cyber-” or “Internet banging” BIBREF7 behavior is precipitated by the fact that an increasing number of young members of the society are joining gangs BIBREF8 , and these young members have become enamored with technology and with the notion of sharing information quickly and publicly through social media. Stronger police surveillance in the physical spaces where gangs congregate further encourages gang members to seek out virtual spaces such as social media to express their affiliation, to sell drugs, and to celebrate their illegal activities BIBREF9 .
Gang members are able to post publicly on Twitter without fear of consequences because there are few tools law enforcement can use to surveil this medium BIBREF10 . Police departments across the United States instead rely on manual processes to search social media for gang member profiles and to study their posts. For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF11 . Officer training is broadly limited to understanding policies on using Twitter in investigations and best practices for data storage BIBREF12 . The safety and security of city neighborhoods can thus be improved if law enforcement were equipped with intelligent tools to study social media for gang activity.
The need for better tools for law enforcement cannot be underscored enough. Recent news reports have shown that many incidents involving gangs start on Twitter, escalate over time, and lead to an offline event that could have been prevented by an early warning. For example, the media reported on a possible connection between the death of a teenage rapper from Illinois and the final set of tweets he posted. One of his last tweets linked to a video of him shouting vulgar words at a rival gang member who, in return, replied “I'ma kill you” on social media. In a following tweet, the teenage rapper posted “im on 069”, revealing his location, and was shot dead soon after that post. Subsequent investigation revealed that the rivalry leading to his death began and was carried out entirely on social media. Other reporting has revealed how innocent bystanders have also become targets in online fights, leaving everyone in a neighborhood at risk.
This paper investigates whether gang member profiles can be identified automatically on Twitter, which can enable better surveillance of gang members on social media. Classifying Twitter profiles into particular types of users has been done in other contexts BIBREF13 , BIBREF14 , BIBREF15 , but gang member profiles pose unique challenges. For example, many Twitter profile classifiers search for contextual clues in tweets and profile descriptions BIBREF16 , but gang member profiles use a rapidly changing lexicon of keywords and phrases that often have only a local, geographic context. This is illustrated in Figure FIGREF6 , which shows the Twitter profile descriptions of two verified deceased gang members. The profile of @OsoArrogantJoJo provides evidence that he belongs to a rival gang of the Black Disciples by #BDK, a hashtag that is only known to those involved with gang culture in Chicago. @PappyNotPapi's profile mentions #PBG and our investigations revealed that this hashtag is newly founded and stands for the Pooh Bear Gang, a gang that was formerly known as the Insane Cutthroat Gangsters. Given the very local, rapidly changing lexicon of gang members on social media, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, this study proposes heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music culture. A large set of gang member profiles, obtained through a careful data collection process, is compared against non-gang member profiles to find contrasting features. Experimental results show that using these sets of features, we can build a classifier that has a low false positive rate and a promising F1-score of 0.7755.
This paper is organized as follows. Section SECREF2 discusses the related literature and positions how this work differs from other related works. Section SECREF3 discusses the data collection, manual feature selection and our approach to identify gang member profiles. Section SECREF4 gives a detailed explanation for evaluation of the proposed method and the results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Gang violence is a well studied social science topic dating back to 1927 BIBREF17 . However, the notions of “Cyber-” or “Internet banging”, which is defined as “the phenomenon of gang affiliates using social media sites to trade insults or make violent threats that lead to homicide or victimization” BIBREF7 , was only recently introduced BIBREF18 , BIBREF10 . Patton et al. introduced the concept of “Internet banging” and studied how social media is now being used as a tool for gang self-promotion and as a way for gang members to gain and maintain street credibility BIBREF7 . They also discussed the relationship between gang-related crime and hip-hop culture, giving examples on how hip-hop music shared on social media websites targeted at harassing rival gang members often ended up in real-world collisions among those gangs. Decker et al. and Patton et al. have also reported that street gangs perform Internet banging with social media posts of videos depicting their illegal behaviors, threats to rival gangs, and firearms BIBREF19 , BIBREF20 .
The ability to take action on these discoveries is limited by the tools available to discover gang members on social media and to analyze the content they post BIBREF18 . Recent attempts to improve our abilities include a proposed architecture for a surveillance system that can learn the structure, function, and operation of gangs through what they post on social media BIBREF10 . However, the architecture requires a set of gang member profiles for input, thus assuming that they have already been discovered. Patton et al. BIBREF20 devised a method to automatically collect tweets from a group of gang members operating in Detroit, MI. However, their approach required the profile names of the gang members to be known beforehand, and data collection was localized to a single city in the country.
This work builds upon existing methods to automatically discover gang member profiles on Twitter. This type of user profile classification problem has been explored in a diverse set of applications such as political affiliation BIBREF13 , ethnicity BIBREF13 , gender BIBREF15 , predicting brand loyalty BIBREF13 , and user occupations BIBREF16 . However, these approaches may utilize an abundance of positive examples in their training data, and only rely on a single feature type (typically, tweet text). Whereas most profile classifiers focus on a single type of feature (e.g. profile text), we consider the use of a variety of feature types, including emoji, YouTube links, and photo features.
Discovering Gang Member Profiles
This section discusses the methodology we followed to study and classify the Twitter profiles of gang members automatically. It includes a semi-automatic data collection process to discover a large set of verifiable gang member profiles, an evaluation of the tweets of gang and non-gang member posts to identify promising features, and the deployment of multiple supervised learning algorithms to perform the classification.
Data collection
Discovering gang member profiles on Twitter to build training and testing datasets is a challenging task. Past strategies to find these profiles were to search for keywords, phrases, and events that are known to be related to gang activity in a particular city a priori BIBREF10 , BIBREF20 . However, such approaches are unlikely to yield adequate data to train an automatic classifier since gang members from different geographic locations and cultures use local languages, location-specific hashtags, and share information related to activities in a local region BIBREF10 . Such region-specific tweets and profiles may be used to train a classifier to find gang members within a small region but not across the Twitterverse. To overcome these limitations, we adopted a semi-automatic workflow, illustrated in Figure FIGREF7 , to build a dataset of gang member profiles suitable for training a classifier. The steps of the workflow are:
1. Seed Term Discovery: Following the success of identifying gang member profiles from Chicago BIBREF10 , we began our data collection with discovering universal terms used by gang members. We first searched for profiles with hashtags for Chicago gangs noted in BIBREF10 , namely #BDK (Black Disciple Killers) and #GDK (Gangster Disciples Killers). Those profiles were analyzed and manually verified as explained in Step 3. Analysis of these profiles identified a small set of hashtags they all use in their profile descriptions. Searching Twitter profiles using those hashtags, we observed that gang members across the U.S. use them, thus we consider those terms to be location neutral. For example, gang members post #FreeDaGuys in their profile to support their fellow members who are in jail, #RIPDaGuys to convey the grieving for fallen gang members, and #FuckDaOpps to show their hatred towards police officers. We used these terms as keywords to discover Twitter profiles irrespective of geographical location. We used the Followerwonk Web service API and Twitter REST API to search Twitter profile descriptions by keywords #FreeDaGuys, #FreeMyNigga, #RIPDaGuys, and #FuckDaOpps. Since there are different informal ways people spell a word in social media, we also considered variations on the spelling of each keyword; for example, for #FreeDaGuys, we searched both #FreeDaGuys, and #FreeTheGuys.
2. Gang Affiliated Rappers' Twitter Profile Discovery: Finding profiles by a small set of keywords is unlikely to yield sufficient data. Thus, we sought additional gang member profiles with an observation from Patton et al. BIBREF7 that the influence of hip-hop music and culture on offline gang member activities can also be seen in their social media posts. We thus also consider the influence of hip-hop culture on Twitter by exploring the Twitter network of known gangster rappers who were murdered in 2015 due to gang-related incidents. We searched for these rapper profiles on Twitter and manually checked that the rapper was affiliated to a gang.
3. Manual verification of Twitter profiles: We verified each profile discovered manually by examining the profile picture, profile background image, recent tweets, and recent pictures posted by a user. During these checks, we searched for terms, activities, and symbols that we believed could be associated with a gang. For example, profiles whose image or background included guns in a threatening way, stacks of money, showing gang hand signs and gestures, and humans holding or posing with a gun, appeared likely to be from a gang member. Such images were often identified in profiles of users who submitted tweets that contain messages of support or sadness for prisoners or recently fallen gang members, or used a high volume of threatening and intimidating slang language. Only profiles where the images, words, and tweets all suggested gang affiliation were labeled as gang affiliates and added to our dataset. Although this manual verification does have a degree of subjectivity, in practice, the images and words used by gang members on social media are so pronounced that we believe any reasonable analyst would agree that they are gang members. We found that not all the profiles collected belonged to gang members; we observed relatives and followers of gang members posting the same hashtags as in Step 1 to convey similar feelings in their profile descriptions.
4. Using Retweets to discover more profiles: From the set of verified profiles, we explored their retweet and follower networks as a way to expand the dataset. We first considered authors of tweets which were retweeted by a gang member in our seed set. In Twitter, “retweeting” is a mechanism by which a user can share someone else's tweet to their follower audience. Assuming that a user only retweets things that they believe or their audience would be interested in, it may be reasonable to assume that gang members would only be interested in sharing what other gang members have to say, and hence, the authors of gang members' retweets could also be gang members.
5. Using Followers and Followees to discover more profiles: We analyzed followers and followees of our seed gang member profiles to find more gang member profiles. A Twitter user can follow other Twitter users so that the individual will be subscribed to their tweets as a follower and they will be able to start a private conversation by sending direct messages to the individual. Motivated by the sociological concept of homophily, which claims that individuals have a tendency to associate and bond with similar others, we hypothesized that the followers and followees of Twitter profiles from the seed set may also be gang members. Manual verification of Twitter profiles collected from retweets, followers, and followees of gang members showed that a majority of those profiles are non-gang members who are either family members, hip-hop artists, women or profiles with pornographic content. To ensure that our dataset is not biased towards a specific gang or geographic location, only a limited number of profiles were collected via retweets, followers and followees.
Table TABREF8 summarizes the number of profiles manually verified as gang members from Twitter profiles collected in steps 1, 2, 4, and 5. Altogether we collected 400 gang members' Twitter profiles. This is a large number compared to previous studies of gang member activities on social media that curated a maximum of 91 profiles BIBREF10 . Moreover, we believe the profiles collected represent a diverse set of gang members that are not biased toward a particular geographic area or lingo, as our data collection process used location-independent terms proven to be used by gang members when they express themselves.
Data analysis
We next explore differences between gang and non-gang member Twitter usage to find promising features for classifying profiles. For this purpose, profiles of non-gang members were collected from the Twitter Streaming API. We collected a random sample of tweets and the profiles of the users who authored the tweets in the random sample. We manually verified that all Twitter profiles collected in this approach belong to non-gang members. The profiles selected were then filtered by location to remove non-U.S. profiles by reverse geo-coding the location stated in their profile description with the Google Maps API. Profiles with location descriptions that were unspecified or did not relate to a location in the U.S. were discarded. We collected 2,000 non-gang member profiles in this manner. In addition, we added 865 manually verified non-gang member profiles collected using the location neutral keywords discussed in Section SECREF3 . Introducing these profiles, which have some characteristics of gang members (such as cursing frequently or cursing at law enforcement) but are not gang members themselves, captures the local language used by family and friends of gang members and by ordinary people in neighborhoods where gangs operate.
With the Twitter REST API, we collected the maximum number of most recent tweets that can be retrieved (3,200) along with profile descriptions and images (profile and cover photos) of every gang and non-gang member profile. The resulting dataset consists of 400 gang member Twitter profiles and 2,865 non-gang member Twitter profiles. The dataset has a total of 821,412 tweets from gang member profiles and 7,238,758 tweets from non-gang member profiles. Prior to analyzing any text content, we removed all of the seed words used to find gang member profiles, all stop words, and performed stemming across all tweets and profile descriptions.
Figure FIGREF14 summarizes the words seen most often in the gang and non-gang members' tweets as clouds. They show a clear difference in language. For example, we note that gang members more frequently use curse words in comparison to ordinary users. Although cursing is frequent in tweets, they represent just 1.15% of all words used BIBREF21 . In contrast, we found 5.72% of all words posted by gang member accounts to be classified as a curse word, which is nearly five times more than the average curse word usage on Twitter. The clouds also reflect the fact that gang members often talk about drugs and money with terms such as smoke, high, hit, and money, while ordinary users hardly speak about finances and drugs. We also noticed that gang members talk about material things with terms such as got, money, make, real, need whereas ordinary users tend to vocalize their feelings with terms such as new, like, love, know, want, look, make, us. These differences make it clear that the individual words used by gang and non-gang members will be relevant features for gang profile classification.
On Twitter, a user can give a self-description as a part of the user's profile. A comparison of the top 10 words in gang members' and non-gang members' Twitter profile descriptions is shown in Figure FIGREF21 . The first 10 words are the most frequently used words in non-gang members' profiles and the latter 10 words are the most frequently used words in gang members' profiles. Word comparison shows that gang members prefer to use curse words (nigga, fuck, shit) in their profile descriptions while non-gang members use words related to their feelings or interests (love, life, live, music, book). The terms rip and free which appear in approximately INLINEFORM0 of all gang member Twitter profiles, suggest that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members. The term gang in gang members' profile descriptions suggest that gang members like to self-identify themselves on Twitter. Such lexical features may therefore be of great importance for automatically identifying gang member profiles. We take counts of unigrams from gang and non-gang members' Twitter profile descriptions as classification features.
It has been recognized that music is a key cultural component in an urban lifestyle and that gang members often want to emulate the scenarios and activities the music conveys BIBREF7 . Our analysis confirms that the influence of gangster rap is expressed in gang members' Twitter posts. We found that 51.25% of the gang members collected have a tweet that links to a YouTube video. Following these links, a simple keyword search for the terms gangsta and hip-hop in the YouTube video description found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre. Moreover, this high proportion is not driven by a small number of profiles that prolifically share YouTube links; eight YouTube links are shared on average by a gang member.
Recognizing the frequency with which gang members post YouTube links on gangster rap and hip-hop, we consider the YouTube videos posted in a user's tweets as features for the classifier. In particular, for each YouTube video tweeted, we used the YouTube API to retrieve the video's description and its comments. Further analysis of YouTube data showed a difference between terms in gang members' YouTube data and non-gang members' YouTube data. For example, the top 5 terms (after stemming and stop word removal) used in YouTube videos shared by gang members are shit, like, nigga, fuck, lil while like, love, peopl, song, get are the top 5 terms in non-gang member video data. To represent a user profile based on their music interests, we generated a bag of words from the video descriptions and comments from all shared videos.
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier.
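As a rough illustration of how this emoji usage can be turned into classifier features, the sketch below (our own, not the authors' code) counts emoji occurrences across a user's tweets with the `emoji` Python package; the tweets shown are invented placeholders and a recent package version exposing `EMOJI_DATA` is assumed.

```python
from collections import Counter
import emoji  # assumes emoji >= 2.0, which exposes EMOJI_DATA

def emoji_frequency(tweets):
    """Count how often each emoji appears across a user's tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(ch for ch in tweet if ch in emoji.EMOJI_DATA)
    return counts

# Placeholder tweets; the count of each emoji becomes one feature dimension.
print(emoji_frequency(["pulled up with the gang \U0001F52B\U0001F4AF", "stay smoking \u26FD\u26FD"]))
```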
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
Learning algorithms
The unigrams of tweets, profile text, and linked YouTube video descriptions and comments, along with the distribution of emoji symbols and the profile image tags, were used to train four different classification models: a Naive Bayes net, a Logistic Regression, a Random Forest, and a Support Vector Machine (SVM). These four models were chosen because they are known to perform well over text features, which is the dominant type of feature considered. The performance of the models is empirically compared to determine the most suitable classification technique for this problem. Data for the models are represented as a vector of term frequencies where the terms were collected from one or more of the feature sets described above.
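A minimal scikit-learn sketch of this setup is given below. It is illustrative rather than the authors' code: the two `documents` (one concatenated feature string per profile) and the labels are placeholders, and current module paths are used even though the paper reports scikit-learn 0.17.1.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

# One concatenated feature string per profile (tweets + profile text +
# YouTube text + emoji symbols + image tags); placeholders shown here.
documents = ["smoke money free rip gang trigger bullet", "love life music book beach pet"]
labels = [1, 0]  # 1 = gang, 0 = non-gang

vectorizer = CountVectorizer()              # term-frequency representation
X = vectorizer.fit_transform(documents)

classifiers = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": LinearSVC(),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)                      # each model is compared empirically
```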
Evaluation
We next evaluate the performance of classifiers that use the above features to discover gang member profiles on Twitter. For this purpose, we use the training set discussed in Section SECREF3 with 400 gang member profiles (the `positive'/`gang' class) and 2,865 non-gang member profiles (the `negative'/`non-gang' class). We trained and evaluated the performance of the classifiers mentioned in Section SECREF31 under a 10-fold cross validation scheme. For each of the four learning algorithms, we consider variations involving only tweet text, emoji, profile, image, or music interest (YouTube comments and video description) features, and a final variant that considers all types of features together. The classifiers that use a single feature type were intended to help us study the quality of their predictive power by itself. When building these single-feature classifiers, we filtered the training dataset based on the availability of the single feature type in the training data. For example, we only used the twitter profiles that had at least a single emoji in their tweets to train classifiers that consider emoji features. We found 3,085 such profiles out of the 3,265 profiles in the training set. When all feature types were considered, we developed two different models:
Because a Twitter profile may not have every feature type, Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. In this model, the non-occurrence of a feature is represented by `zeroing out' the feature value during model training. Model(2) represents the ideal scenario where all profiles contain every feature type. For this model, we used 1,358 training instances (42% of all training instances), out of which 172 were gang members (43% of all gang members) and 1,186 were non-gang members (41% of all non-gang members). We used version 0.17.1 of scikit-learn machine learning library to implement the classifiers.
For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' and `non-gang' classes, namely Precision $= \frac{tp}{tp+fp}$, Recall $= \frac{tp}{tp+fn}$, and $F_1$-score $= \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$, where $tp$ is the number of true positives, $fp$ is the number of false positives, $tn$ is the number of true negatives, and $fn$ is the number of false negatives. We report these metrics for the positive `gang' and negative `non-gang' classes separately because of class imbalance in our dataset.
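The evaluation loop can be sketched as follows; this is again an illustration rather than the original code, the feature matrix and labels are synthetic placeholders, and modern scikit-learn module paths are used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.RandomState(0)
X = rng.rand(300, 50)                       # placeholder feature matrix
y = rng.randint(0, 2, size=300)             # 1 = gang, 0 = non-gang

# Out-of-fold predictions under 10-fold cross validation.
y_pred = cross_val_predict(RandomForestClassifier(n_estimators=100), X, y, cv=10)
precision, recall, f1, _ = precision_recall_fscore_support(y, y_pred, labels=[1, 0])
for cls, p, r, f in zip(["gang", "non-gang"], precision, recall, f1):
    print(f"{cls}: precision={p:.4f} recall={r:.4f} F1={f:.4f}")
```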
Experimental results
Table TABREF37 presents the average precision, recall, and $F_1$-score over the 10 folds for the single-feature and combined feature classifiers. The table includes, in braces (`{ }'), the number of gang and non-gang profiles that contain a particular feature type, and hence the number of profiles used for the 10-fold cross validation. Because it is reasonable to expect that most Twitter profiles are not those of gang members, predicting a Twitter user as a non-gang member is much easier than predicting a Twitter user as a gang member. Moreover, false positive classifications of the `gang' class may be detrimental to law enforcement investigations, which may go awry as they surveil an innocent person based on the classifier's suggestion. We thus believe a small false positive rate for the `gang' class to be an especially important evaluation metric. We say that a classifier is `ideal' if it demonstrates high precision, recall, and $F_1$-score for the `gang' class while performing well on the `non-gang' class as well.
The best performing classifier that considers single features is a Random Forest model over tweet features (T), with a reasonable $F_1$-score of 0.7229 for the `gang' class. It also features the highest $F_1$-score for the `non-gang' class (0.9671). Its strong performance is intuitive given the striking differences in language as shown in Figure FIGREF14 and discussed in Section UID22. We also noted that music features offer promising results, with an $F_1$-score of 0.6505 with a Naive Bayes classifier, as well as emoji features with an $F_1$-score of 0.6067, also achieved by a Naive Bayes classifier. However, the use of profile data and image tags by themselves yields relatively poor $F_1$-scores no matter which classifier is considered. There may be two reasons for this despite the differences we observed in Section SECREF17. First, these two feature types did not generate a large number of specific features for learning. For example, descriptions are limited to just 160 characters per profile, leading to a limited number of unigrams (in our dataset, 10 on average) that can be used to train the classifiers. Second, the profile images were tagged by a third party Web service which is not specifically designed to identify gang hand signs, drugs and guns, which are often shared by gang members. This led to a small set of image tags in their profiles that were fairly generic, i.e., the image tags in Figure FIGREF36 such as `people', `man', and `adult'.
Combining these diverse sets of features into a single classifier yields even better results. Our results for Model(1) show that the Random Forest achieves the highest $F_1$-scores for both the `gang' (0.7364) and `non-gang' (0.9690) classes and yields the best precision of 0.8792, which corresponds to a low false positive rate when labeling a profile as a gang member. Despite the fact that it has lower positive recall compared to the second best performing classifier (a Random Forest trained over only tweet text features (T)), for this problem setting we should be willing to increase the chance that a gang member will go unclassified if it means reducing the chance of applying a `gang' label to a non-gang member. When we tested Model(2), a Random Forest classifier achieved an $F_1$-score of 0.7755 (an improvement of 7.28% with respect to the best performing single feature type classifier (T)) for the `gang' class, with a precision of 0.8961 (an improvement of 6.26% with respect to (T)) and a recall of 0.6994 (an improvement of 9.26% with respect to (T)). Model(2) thus outperforms Model(1), and we expect its performance to improve with the availability of more training data with all feature types.
Evaluation Over Unseen Profiles
We also tested the trained classifiers using a set of Twitter profiles from a separate data collection process that may emulate the classifier's operation in a real-time setting. For this experiment, we captured real-time tweets from Los Angeles, CA and from ten South Side, Chicago neighborhoods that are known for gang-related activities BIBREF10 using the Twitter streaming API. We consider these areas with known gang presence on social media to ensure that some positive profiles would appear in our test set. We ultimately collected 24,162 Twitter profiles: 15,662 from Los Angeles, and 8,500 from Chicago. We populated data for each profile by using the 3,200 most recent tweets (the maximum that can be collected from Twitter's API) for each profile. Since the 24,162 profiles are far too many to label manually, we qualitatively study those profiles the classifier placed into the `gang' class.
We used the training dataset to train our best performing random forest classifier (which uses all feature types) and tested it on the test dataset. We then analyzed the Twitter profiles that our classifier labeled as belonging to the `gang' class. Each of those profiles had several features which overlap with gang members, such as displaying hand signs and weapons in their profile images or in videos posted by them, gang names or gang-related hashtags in their profile descriptions, frequent use of curse words, and the use of terms such as "my homie" to refer to self-identified gang members. Representative tweets extracted from those profiles are depicted in Figure FIGREF41. The most frequent words found in tweets from those profiles were shit, nigga, got, bitch, go, fuck etc., and their user profiles had terms such as free, artist, shit, fuck, freedagang, and ripthefallen. They had frequently used emojis such as face with tears of joy, hundred points symbol, fire, skull, money bag, and pistol. For some profiles, it was less obvious that the classifier correctly identified a gang member. Such profiles used the same emojis and curse words commonly found in gang members' profiles, but their profile picture and tweet content was not indicative of a gang affiliation. In conclusion, we find that, in a real-time-like setting, the classifier is able to extract profiles with features that strongly suggest gang affiliation. Of course, these profiles demand further investigation and extensive evidence from other sources in order to draw a concrete conclusion, especially in the context of a law enforcement investigation. We refrain from reporting any profile names or specific details about the profiles labeled as a `gang' member to comply with the applicable IRB governing this human subject research.
Conclusion and Future Work
This paper presented an approach to address the problem of automatically identifying gang member profiles on Twitter. Despite the challenges in developing such automated systems, mainly due to difficulties in finding online gang member profiles for developing training datasets, we proposed an approach that uses features extracted from textual descriptions, emojis, images and videos shared on Twitter (textual features extracted from images and videos). Exploratory analysis of these types of features revealed interesting, and sometimes striking, differences in the ways gang and non-gang members use Twitter. Classifiers trained over features that highlight these differences were evaluated under 10-fold cross validation. Our best classifier achieved a promising $F_1$-score of 0.7755 over the `gang' profiles when all types of features were considered.
Future work will strengthen our training dataset by including more gang member Twitter profiles found by searching for more location-independent keywords. We also plan to develop our own image classification system specifically designed to classify images found on gang member profiles. We would also like to experiment with building dictionaries that contain gang names to understand whether "having a gang name in the profile description" as a feature can improve our results. Finally, we would also like to study how we can further improve our classifier models using word embeddings BIBREF23 and the social networks of known gang members.
Acknowledgement
We are thankful to Uday Kiran Yeda for helping us with data collection. We acknowledge partial support from the National Science Foundation (NSF) award: CNS-1513721: “Context-Aware Harassment Detection on Social Media”, National Institutes of Health (NIH) award: MH105384-01A1: “Modeling Social Behavior for Healthcare Utilization in Depression” and Grant No. 2014-PS-PSN-00006 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the U.S. Department of Justice's Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the SMART Office. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice, NSF or NIH.
px | No |
5845d1db7f819dbadb72e7df69d49c3f424b5730 | 5845d1db7f819dbadb72e7df69d49c3f424b5730_0 | Q: What is English mixed with in the TRAC dataset?
Text: Introduction
The exponential increase of interactions on various social media platforms has generated a huge amount of data on platforms like Facebook and Twitter. These interactions have had not only positive but also negative effects on billions of people, owing to the fact that there are lots of aggressive comments (like hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicide BIBREF1. In this paper we concentrate on problems related to aggressiveness.
The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:
Overtly Aggressive (OAG) - This type of aggression is a direct verbal attack pointing to a particular individual or group. For example, "Well said sonu..you have courage to stand against dadagiri of Muslims".
Covertly Aggressive (CAG) - In this type of aggression the attack is not direct but hidden, subtle and more indirect, while being stated politely most of the time. For example, "Dear India, stop playing with the emotions of your people for votes."
Non-Aggressive (NAG) - Generally this type of text lacks any kind of aggression; it is basically used to state facts, extend wishes on occasions, and be polite and supportive.
The additional discussion on aggressiveness task can be found in Kaggle task , which just divided the task into two classes - i.e., presence or absence of aggression in tweets.
The informal setting/environment of social media often encourages multilingual speakers to switch back and forth between languages when speaking or writing. This results in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systems BIBREF3. This language interchange makes the grammar more complex, and thus it becomes tough for traditional algorithms to handle it. The presence of a high percentage of code-mixed content in social media text has therefore increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.
The massive increase in social media data has rendered manual methods of content moderation difficult and costly. Machine learning and deep learning methods to identify such phenomena have therefore attracted more attention from the research community in recent years BIBREF4.
Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).
A lot of the previous approaches BIBREF5 have used an ensemble model for the task. For example, some of them use an ensemble of statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9, some use an ensemble of statistical and deep learning models BIBREF10, BIBREF11, BIBREF12, and some use an ensemble of deep learning models BIBREF13. There are approaches which propose a unified architecture based on deep learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, while some propose a unified statistical model BIBREF7. Additionally, some approaches use data augmentation, either through translation or by labeling external data, to make the model generalize across domains BIBREF14, BIBREF10, BIBREF7.
Most of the above-discussed systems show high performance on either (a) the Twitter dataset or (b) the Facebook dataset (given in TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexity of the two datasets. So, we concentrated on developing a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:
Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.
Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term "NLP Features" to represent it in the entire paper.
Dual embedding based on FastText and Glove. This dual embedding helps in high vocabulary coverage and to capture the rare and partially incorrect words in the text (specially by FastText BIBREF20).
Our "Deep-text architecture" uses a model averaging strategy with three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that use multiple models for the same problem and combine their predictions to produce a more reliable and consistent prediction accuracy BIBREF21. This is the simplest form of weighted average ensemble based prediction BIBREF22, where each ensemble member contributes equally to predictions. Specifically, in our case, three different models have been used. The following contains the intuition behind the selection of these three models:
Deep Pyramid CNN BIBREF23 being deeper helps to learn long range associations between temporal regions of text using two-view embeddings.
Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.
Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.
NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.
The intuition behind the NLP features are the following:
Emotion Sensor Dataset We have introduced the use of emotion sensor features as meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from Twitter and blogs.
Controlled Topical Signals from Empath. Empath can analyse the text across 200 gold standard topics and emotions. Additionally, it uses neural embedding to draw connotation among words across more than 1.8 billion words. We have used only selected categories like violence, hate, anger, aggression, social media and dispute from 200 Empath categories useful for us unlikeBIBREF12 which takes 194 categories.
Emoticons frequently used on social media indicates the sense of sentenceBIBREF17, BIBREF19, BIBREF9.
Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15 POS Tags provide the degree of target aggressiveness. LikeBIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to just identify words related to persons, activities, quality, etc, in the text.
Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral) like used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressiveness with non-aggressiveness in the text.
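The sketch below illustrates one way these hand-crafted signals could be assembled for a single post. It is an approximation rather than the authors' feature extraction code: the emotion-sensor lookup is a hypothetical stand-in for the Kaggle word-sensor table, and the Empath category identifiers are assumed to match the names listed above; Empath, VADER, and spaCy are real libraries used as described.

```python
import spacy
from empath import Empath
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")
empath_lexicon = Empath()
vader = SentimentIntensityAnalyzer()

EMPATH_TOPICS = ["violence", "hate", "anger", "aggression", "social_media", "dispute"]
POS_GROUPS = {
    "adjective": {"JJ", "JJR", "JJS"},
    "adverb": {"RB", "RBR", "RBS"},
    "verb": {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"},
    "noun": {"NN", "NNS", "NNP", "NNPS"},
}

def nlp_features(text, emotion_sensor):
    """emotion_sensor: hypothetical dict mapping a word to its 7 emotion class scores."""
    feats = []
    # 1. Emotion-sensor scores averaged over the words of the post (7 values).
    words = text.lower().split()
    rows = [emotion_sensor.get(w, [0.0] * 7) for w in words] or [[0.0] * 7]
    feats += [sum(col) / len(rows) for col in zip(*rows)]
    # 2. Selected Empath topical signals, normalized (6 values).
    topical = empath_lexicon.analyze(text, categories=EMPATH_TOPICS, normalize=True)
    feats += [topical[c] for c in EMPATH_TOPICS]
    # 3. Normalized frequency of the four POS-tag groups (4 values).
    doc = nlp(text)
    total = max(len(doc), 1)
    feats += [sum(tok.tag_ in tags for tok in doc) / total for tags in POS_GROUPS.values()]
    # 4. VADER sentiment polarity: positive, negative, neutral (3 values).
    pol = vader.polarity_scores(text)
    feats += [pol["pos"], pol["neg"], pol["neu"]]
    return feats  # emoticon and punctuation counts would extend this vector to 24
```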
The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (on English code-mixed Facebook data). This means the performance achieved by our system depends entirely on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state-of-the-art approaches used for aggression identification on English code-mixed TRAC data; while being trained only on Facebook comments, the system also outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.
Related work
There are several works on aggression identification submitted at TRAC 2018; among them, some approaches use an ensemble of multiple statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models, like BIBREF10, BIBREF11, BIBREF12, have used an ensemble of statistical and deep learning models. In these models the statistical part uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticons, etc. Models like BIBREF13 have used an ensemble of deep learning models based on majority voting.
Some other models, like BIBREF28, BIBREF12, BIBREF9, have used different models for Facebook and Twitter, while approaches like BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architectures based on deep learning. Systems like BIBREF14, BIBREF10, BIBREF7 have used data augmentation, either through translation or by labelling external data, to make the model generalize across domains, while BIBREF7 has proposed a unified statistical model.
Among other approaches, BIBREF6 extracted features from TF-IDF of character n-grams, while BIBREF28 uses an LSTM with pre-trained embeddings from FastText. BIBREF15 have used a BiLSTM based model and an SVM meta-classifier model for the Facebook and Twitter test sets, respectively, while BIBREF13 tried ensembling of CNN, LSTM, and BiLSTM.
Some approaches, like BIBREF12, have used emotion frequency as one of the features, while some others use sentiment emotion as a feature BIBREF11. Also, BIBREF17, BIBREF19 have converted emoticons to their descriptions, and BIBREF9 have used per-class TF-IDF of emoticons as one of the features. Compared to all these approaches, we have concentrated on capturing multiple linguistic/pattern based relations, key-terms and key-patterns (with their association in text) through a combination of deep learning architectures with model averaging. We have also used NLP features, obtained from psycho-linguistic and basic linguistic features, as additional features with our deep learning architecture.
Methodology
In this section, we describe our system architecture for the aggressiveness classifier. In Section SECREF23 we describe the data preprocessing applied to the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we describe the architecture of different deep learning models like Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM, respectively. Finally, in Section SECREF49, we describe the model averaging based classification model which combines the prediction probabilities from the three deep learning architectures discussed above (see Figure FIGREF22 for a block diagram of the system architecture).
Methodology ::: Data Preprocessing
We consider the text to be well formatted before applying the text to the embedding layer. First, we detect non-English text (which is rare) and translate all of it to English using Google Translate. Still, some code-mixed words like "mc" and "bc", other English abbreviations, and spelling errors like "nd" in place of "and" or "u" in place of "you" cause the deep learning model to confuse sentences of the same meaning. We follow the preprocessing strategy of BIBREF17 to normalize abbreviations, remove spelling errors, URLs and punctuation marks, and convert emojis to their descriptions.
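A simplified version of this preprocessing step might look as follows; the abbreviation map is a small hypothetical sample, the translation step is left out, and `emoji.demojize` from the `emoji` package stands in for the emoji-to-description conversion.

```python
import re
import emoji

ABBREVIATIONS = {"u": "you", "nd": "and", "r": "are"}  # sample normalization map

def preprocess(text):
    text = emoji.demojize(text, delimiters=(" ", " "))   # emojis -> textual descriptions
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # drop URLs
    text = re.sub(r"[^\w\s]", " ", text)                 # drop punctuation marks
    tokens = [ABBREVIATIONS.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(tokens)

print(preprocess("u nd me, check https://example.com \U0001F602"))
```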
https://spacy.io/usage/linguistic-features#pos-tagging
Methodology ::: NLP Features
We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature, which uses a statistical model to classify words into 7 different classes based on sentences obtained from Twitter and blogs containing a total of 1,185,540 words. The second one is the collection of selected topical signals from text, collected using Empath (see Table 1).
Different from previous approachesBIBREF8, BIBREF12 where BIBREF12 have used Emotion features in the form of frequency while BIBREF8 have used emotion feature vector obtained from LIWC 2007BIBREF30. UnlikeBIBREF12 we have used only 6 topical signals from EmapthBIBREF29. We have borrowed the idea of using other features like punctuation features and parts-of-speech tags from BIBREF12. The Table 1. lists and describes features, tools used to obtain them and the number of features resulted from each type.
Methodology ::: Deep Pyramid CNN(DPCNN)
CNNs have been proven to be great feature extractors for text classification BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23, while deeper networks (whether RNNs or CNNs) have been proven to learn long-range associations, as in deeper character-level CNNs BIBREF36, BIBREF37 and complex combinations of RNNs and CNNs BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN) BIBREF23 has 15 layers of word-level CNNs and contains pre-activation similar to that proposed in the improved ResNet BIBREF43. DPCNN outperforms the 32-layer character CNN BIBREF37 and hierarchical attention networks BIBREF42, and it has the added advantage that, due to its pyramid structure, it does not require dimension matching in shortcut connections defined as z + h(z) as in BIBREF43, where h(z) represents the skipped layers, essentially containing two convolutional layers with pre-activation. It uses enhanced region embedding, which consumes pre-trained embeddings (in our case, the FastText+Glove based dual embedding).
Enhanced Region Embedding. The current DPCNNBIBREF23, uses two view type enhanced region embedding. For the text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, it trains a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture has been shown in Figure FIGREF29.
Let each word input $x_j \in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$ and let the sentence $s_i$ contain a sequence of $n$ words $\lbrace w_{1},w_{2},w_{3},......,w_{n}\rbrace $ as shown in Figure FIGREF29. In comparison to a conventional convolution layer, DPCNN proposes to use pre-activation; thus, essentially, the convolutional layer of DPCNN is $\textbf {W}\sigma (\textbf {x})+\textbf {b}$, where $\textbf {W}$ and $\textbf {b}$ (unique to each layer) are the weight matrix and bias respectively, and we use $\sigma $ as PReLU BIBREF44. During implementation we use a kernel size of 3 (with $\textbf {x}$ denoting the small overlapping regions of text). The number of filters (the number of feature maps, denoted by the number of rows of $\textbf {W}$) is 128, as depicted in Figure FIGREF29. Keeping the number of filters the same in each convolution layer and using max-pooling with stride 2 halves the computation time and doubles the net coverage of the convolution kernel. Thus the deeper layers learn long-range associations between regions of text. Let $h_{dpcnn} \in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \in R^{24}$ the NLP features computed from the text, and let $z_1 \in R^{p_1 + 24}$ be another hidden state obtained as
where, $\oplus $ denotes concatenation. The vector $z_1$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities, specifically for class label $k$ is given as:
where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively.
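A condensed tf.keras sketch of this branch is given below. The framework choice and the block depth are ours (the paper does not publish its implementation), so this is an approximation of the described architecture: 128 filters, kernel size 3, PReLU pre-activation, shortcut connections z + h(z), stride-2 max-pooling between blocks, and the NLP features concatenated before the softmax layer.

```python
import tensorflow as tf
from tensorflow.keras import layers

def pre_act_conv(x):
    """Pre-activation convolution: PReLU followed by a 128-filter, size-3 conv."""
    h = layers.PReLU(shared_axes=[1])(x)
    return layers.Conv1D(128, 3, padding="same",
                         kernel_regularizer=tf.keras.regularizers.l2(1e-5))(h)

def dpcnn_branch(max_len=200, vocab_size=30000, embed_dim=400, n_nlp=24, n_classes=3):
    words = layers.Input((max_len,), dtype="int32")
    nlp_feats = layers.Input((n_nlp,))
    x = layers.Embedding(vocab_size, embed_dim)(words)
    x = layers.SpatialDropout1D(0.3)(x)
    x = layers.Conv1D(128, 3, padding="same")(x)             # region embedding
    for _ in range(6):                                       # pyramid blocks (depth is a placeholder)
        shortcut = x
        x = pre_act_conv(x)
        x = pre_act_conv(x)
        x = layers.Add()([shortcut, x])                      # z + h(z)
        x = layers.MaxPooling1D(pool_size=3, strides=2, padding="same")(x)
    h_dpcnn = layers.GlobalMaxPooling1D()(x)
    z1 = layers.Concatenate()([h_dpcnn, nlp_feats])          # append NLP features
    out = layers.Dense(n_classes, activation="softmax")(z1)
    return tf.keras.Model([words, nlp_feats], out)

model = dpcnn_branch()
model.compile(optimizer="adam", loss="categorical_crossentropy")
```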
Methodology ::: Disconnected RNN(DRNN)
Given a sequence $s_i = [x_{1}, x_{2}, x_{3},....x_{n}]$ where $x_{j} \in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of input text applied to a variant of RNN called Long Short-Term Memory (LSTM)BIBREF45 as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps on updating the memory cell with current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows:
where $\hat{c}_t$ is the current cell state obtained from current input $x_t$ and previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activation corresponding to input gate, forget gate and output gate respectively, $\sigma $ denotes the logistic sigmoid function and $\odot $ denotes the element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors given as
Specifically we have used Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions(forward & backward). The forward LSTM takes the natural order of words from $x_{1}$ to $x_{n}$ to obtain $\overrightarrow{h_t}$, while backward-LSTM $x_{n}$ to $x_{1}$ to obtain $\overleftarrow{h_t}$. then $h_t$ is calculated as
where $\oplus $ is the concatenation and $L$ is the size for one-directional LSTM. Therefore we denote the hidden state in equation DISPLAY_FORM37 with BiLSTM as
To avoid handling long sequences and to capture local information for each word, we define a window size $k$ for each word such that the BiLSTM only sees the previous $k-1$ words together with the current word, where $k$ is a hyperparameter BIBREF24. We use padding <PAD> to make the slices of fixed size k (as shown in Figure FIGREF33). This provides each hidden state $h_t$ with a sequence of $k$ previous words. Since a phrase of $k$ words can lie anywhere in the text, this helps to model position-invariant phrase representations, due to which the model identifies key phrases important for a particular category. In this case, the equation of $h_t$ is given as
The output hidden vectors, $H = [h_1, h_2, h_3, ...... h_n] \in R^{n \times 2L}$ are converted to fixed-length vector $h_{drnn} \in R^{2L}$ with max pooling over time:
Let $f_{nlp} \in R^{24}$ be the NLP features computed from the text and let $z_2 \in R^{2L + 24}$ be another hidden state obtained as
where $\oplus $ denotes concatenation. The vector $z_2$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities, specifically for class label $k$ is given as:
where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias.
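A rough tf.keras approximation of this disconnected-RNN branch is sketched below (not the reference implementation): every position is given a window of the current word and its k-1 predecessors, a shared BiLSTM encodes each window, and the window representations are max-pooled before the NLP features are appended.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(token_ids, k=8):
    """Left-pad a token id sequence and slice one k-sized window per position."""
    padded = np.concatenate([np.zeros(k - 1, dtype=int), np.asarray(token_ids)])
    return np.stack([padded[t:t + k] for t in range(len(token_ids))])

def drnn_branch(max_len=200, k=8, vocab_size=30000, embed_dim=400,
                n_nlp=24, n_classes=3, lstm_units=128):
    windows = layers.Input((max_len, k), dtype="int32")      # k-word window per time step
    nlp_feats = layers.Input((n_nlp,))
    x = layers.Embedding(vocab_size, embed_dim)(windows)     # (batch, T, k, d)
    x = layers.TimeDistributed(
        layers.Bidirectional(layers.LSTM(lstm_units)))(x)    # (batch, T, 2*units)
    h_drnn = layers.GlobalMaxPooling1D()(x)                  # max pooling over time
    z2 = layers.Concatenate()([h_drnn, nlp_feats])
    out = layers.Dense(n_classes, activation="softmax")(z2)
    return tf.keras.Model([windows, nlp_feats], out)

model = drnn_branch()
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```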
Methodology ::: Pooled BiLSTM
The architecture has been shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, x_{3}, ..... x_{j}]$, where $x_j \in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after BiLSTM is given as
To avoid the loss of information caused by modelling the entire sequence, we have concatenated the max-pooled ($c_{max}$) and mean-pooled ($c_{mean}$) representations of the hidden states calculated over all time steps BIBREF25. We have also concatenated the NLP features $f_{nlp} \in R^{24}$; the final feature vector $z_{3}$ is given as
where $\oplus $ denotes concatenation. The final feature vector $z_3$ is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively.
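The pooled BiLSTM branch can be sketched in tf.keras as follows, again as an approximation under the same framework assumption rather than the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def pooled_bilstm_branch(max_len=200, vocab_size=30000, embed_dim=400,
                         n_nlp=24, n_classes=3, lstm_units=256):
    words = layers.Input((max_len,), dtype="int32")
    nlp_feats = layers.Input((n_nlp,))
    x = layers.Embedding(vocab_size, embed_dim)(words)
    x = layers.SpatialDropout1D(0.3)(x)
    states = layers.Bidirectional(
        layers.LSTM(lstm_units, return_sequences=True))(x)   # hidden states for all steps
    last = layers.Lambda(lambda t: t[:, -1, :])(states)      # final time step
    mean_pool = layers.GlobalAveragePooling1D()(states)
    max_pool = layers.GlobalMaxPooling1D()(states)
    z3 = layers.Concatenate()([last, mean_pool, max_pool, nlp_feats])
    out = layers.Dense(n_classes, activation="softmax")(z3)
    return tf.keras.Model([words, nlp_feats], out)

model = pooled_bilstm_branch()
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```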
Methodology ::: Classification Model
According to deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging might be a reasonable ensemble for similar base learners of comparable performance. Now, similar to the information discussed in BIBREF21, we can compute the model averaging (unweighted) by combining the softmax probabilities of three different classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43, DISPLAY_FORM48. The averaged class probabilities are computed as:
where K is the number of classes, and $\hat{y_i}$ is the predicted label for sentence $s_i$.
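In practice the averaging step reduces to a few lines; the probability arrays below are placeholders standing in for each branch's softmax output over the three classes.

```python
import numpy as np

dpcnn_probs = np.array([[0.70, 0.20, 0.10]])    # placeholder softmax outputs
drnn_probs = np.array([[0.50, 0.30, 0.20]])
pooled_probs = np.array([[0.60, 0.30, 0.10]])

avg_probs = (dpcnn_probs + drnn_probs + pooled_probs) / 3.0   # unweighted average
predicted_label = avg_probs.argmax(axis=1)                    # index over {OAG, CAG, NAG}
print(avg_probs, predicted_label)
```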
Experiment and Evaluation ::: Dataset Description
We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.
TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels, (a) Non-Aggressive(NAG), (b) Overtly-Aggressive (OAG) and (c) Covertly-Aggressive(CAG). The distribution of training, validation and test sets are described in Table TABREF56.
Kaggle Dataset: This dataset contains 20001 tweets which are manually labeled. The labels are divided into two categories (indicating presence or absence of aggression in tweets) AGG(Aggressive) or NAG(Non-Aggressive). We have used the same test split available in the baseline code. The distribution for each of the training and test is given in Table TABREF56.
Experiment and Evaluation ::: Experimental Setup
We have used Glove EmbeddingsBIBREF49 concatenated with FastText EmbeddingsBIBREF20 in all the three classification models presented in this paper. Specifically, we used Glove pre-trained vectors obtained from Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries where each word is represented using 100-dimensional vector. In the case of FastText the word is represented using 300-dimensional vector. Also, we have applied spatial dropoutBIBREF50 of 0.3 at embedding layer for DPCNN(in section SECREF30) and Pooled BiLSTM(in section SECREF45). For DPCNN model(in SECREF30) we have learnt 128-dimensional vector representation for unsupervised embeddings implicitly for task specific representation as in BIBREF23. Additionally, for DPCNN all the convolutional layers used 128 filters, kernel size of 3 and max-pooling stride 2. Additionally, in the case of DPCNN we have used kernel and bias regularizer of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) proposed in BIBREF44 while the activation at each of the convolutional kernel is linear. For, DRNN(in section SECREF34) we have used the window size of 8 and rest of the parameters related to LSTM units are same as given inBIBREF24. For, Pooled BiLSTM(in section SECREF45) we have used LSTM hidden units size as 256. The maximum sequence length is 200 in all three models. In each of the classification model the classification layer contains the fully connected layer with softmax activation with output size of 3 equal to number of classes in case of TRAC 2018 dataset and its 2 in case of Kaggle dataset. Training has been done using ADAM optimizerBIBREF51 for DPCNN and RMSPROPBIBREF52 for DRNN and Pooled Bi-LSTM models. All the models are trained end-to-end using softmax cross entropy lossBIBREF53 for TRAC 2018 dataset and binary cross entropy lossBIBREF53 for Kaggle dataset.
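The dual embedding can be materialized as a single matrix whose rows concatenate the 100-dimensional GloVe Twitter vector and the 300-dimensional FastText vector of each vocabulary word; the sketch below uses hypothetical file names and leaves out-of-vocabulary rows at zero.

```python
import numpy as np

def load_vectors(path, dim):
    """Read a whitespace-separated embedding file into a word -> vector dict."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vectors

def build_dual_embedding(word_index, glove_path="glove.twitter.27B.100d.txt",
                         fasttext_path="fasttext.300d.vec"):
    glove = load_vectors(glove_path, 100)
    fasttext = load_vectors(fasttext_path, 300)
    matrix = np.zeros((len(word_index) + 1, 400), dtype="float32")
    for word, idx in word_index.items():
        if word in glove:
            matrix[idx, :100] = glove[word]
        if word in fasttext:
            matrix[idx, 100:] = fasttext[word]
    return matrix   # used to initialize the Embedding layer of each branch
```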
To train our model for TRAC 2018 dataset, we merged the training and validation dataset and then used 10% split from shuffled dataset to save the best model, for all classifiers. We have used only 20 NLP features (except TF-IDF Emoticon feature and Punctuation feature as given in Table TABREF25) for Kaggle dataset (as these are not present in the Kaggle dataset).
Experiment and Evaluation ::: Evaluation Strategy
To compare our experimental results we have used top-5 systems from the published results of TRAC-2018BIBREF5. To compare our results on Kaggle dataset, we have used the last & the best published result on Kaggle website as a baseline. We have conducted the separate experiments, to properly investigate the performance of (a) each of the classifiers (used in our model averaging based system), (b) impact of the NLP features on each of these classifiers and finally, (c) the performance of our proposed system. In Table TABREF57, TABREF57 and TABREF57, models, named as DPCNN(ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM(ref SECREF45) are corresponding models without NLP features. Similarly, DPCNN+NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are corresponding models with NLP features. The Model Averaging (A+B+C) is the ensemble of three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method, which represents the model averaging of three models with NLP features.
Experiment and Evaluation ::: Results and Discussion
In this paper, we have evaluated our model using the weighted macro-averaged F-score. The measure is defined as in BIBREF5, BIBREF2: it weights the F-score computed per class based on the class composition in the test set and then takes the average of these per-class F-scores to give the final F-score. Tables TABREF57, TABREF57 and TABREF57 present the comparative experimental results for the method proposed in this paper with respect to the state-of-the-art. The top 5 models BIBREF5 given in Tables TABREF57 and TABREF57 are the best performing models for the Facebook and Twitter test datasets, respectively, in TRAC 2018. We have followed all the experimental guidelines discussed in the TRAC contest guideline paper BIBREF2, BIBREF5. From the results given in Tables TABREF57, TABREF57 and TABREF57, it is clear that our proposed model shows the best performance among all of the approaches. These results also show that all the deep learning architectures with NLP features perform better than the corresponding individual deep learning architectures. This means the NLP features add some value to the architectures, even if it is not very high.
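With scikit-learn this measure corresponds to f1_score with average="weighted"; the labels below are placeholders for the gold and predicted classes.

```python
from sklearn.metrics import f1_score

y_true = ["NAG", "OAG", "CAG", "NAG", "NAG", "OAG"]   # placeholder gold labels
y_pred = ["NAG", "OAG", "NAG", "NAG", "CAG", "OAG"]   # placeholder predictions
print(round(f1_score(y_true, y_pred, average="weighted"), 4))
```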
Conclusion and Future Work
In this paper, we have briefly described the approach we have taken to solve the aggressive identification on online social media texts which is very challenging since the dataset is noisy and code-mixed. We presented an ensemble of deep learning models which outperform previous approaches by sufficient margin while having the ability to generalize across domains.
In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are well defined we will look after if we further fine-tune the categories with more data. In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources). | Hindi |
e829f008d62312357e0354a9ed3b0827c91c9401 | e829f008d62312357e0354a9ed3b0827c91c9401_0 | Q: Which psycholinguistic and basic linguistic features are used?
Text: Introduction
The exponential increase of interactions on various social media platforms has generated a huge amount of data on platforms like Facebook and Twitter. These interactions have had not only positive but also negative effects on billions of people, owing to the fact that there are lots of aggressive comments (like hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicide BIBREF1. In this paper we concentrate on problems related to aggressiveness.
The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:
Overtly Aggressive (OAG) - This type of aggression is a direct verbal attack pointing to a particular individual or group. For example, "Well said sonu..you have courage to stand against dadagiri of Muslims".
Covertly Aggressive (CAG) - In this type of aggression the attack is not direct but hidden, subtle and more indirect, while being stated politely most of the time. For example, "Dear India, stop playing with the emotions of your people for votes."
Non-Aggressive (NAG) - Generally this type of text lacks any kind of aggression; it is basically used to state facts, extend wishes on occasions, and be polite and supportive.
The additional discussion on aggressiveness task can be found in Kaggle task , which just divided the task into two classes - i.e., presence or absence of aggression in tweets.
The informal setting/environment of social media often encourages multilingual speakers to switch back and forth between languages when speaking or writing. This results in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systems BIBREF3. This language interchange makes the grammar more complex, and thus it becomes tough for traditional algorithms to handle it. The presence of a high percentage of code-mixed content in social media text has therefore increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.
The massive increase in social media data has rendered manual methods of content moderation difficult and costly. Machine learning and deep learning methods to identify such phenomena have therefore attracted more attention from the research community in recent years BIBREF4.
Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).
A lot of the previous approaches BIBREF5 have used an ensemble model for the task. For example, some of them use an ensemble of statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9, some use an ensemble of statistical and deep learning models BIBREF10, BIBREF11, BIBREF12, and some use an ensemble of deep learning models BIBREF13. There are approaches which propose a unified architecture based on deep learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, while some propose a unified statistical model BIBREF7. Additionally, some approaches use data augmentation, either through translation or by labeling external data, to make the model generalize across domains BIBREF14, BIBREF10, BIBREF7.
Most of the above-discussed systems show high performance on either (a) the Twitter dataset or (b) the Facebook dataset (given in TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexity of the two datasets. So, we concentrated on developing a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:
Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.
Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term "NLP Features" to represent it in the entire paper.
Dual embedding based on FastText and Glove. This dual embedding helps in high vocabulary coverage and to capture the rare and partially incorrect words in the text (specially by FastText BIBREF20).
Our "Deep-text architecture" uses a model averaging strategy with three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that use multiple models for the same problem and combine their predictions to produce a more reliable and consistent prediction accuracy BIBREF21. This is the simplest form of weighted average ensemble based prediction BIBREF22, where each ensemble member contributes equally to predictions. Specifically, in our case, three different models have been used. The following contains the intuition behind the selection of these three models:
Deep Pyramid CNN BIBREF23 being deeper helps to learn long range associations between temporal regions of text using two-view embeddings.
Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.
Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.
NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.
The intuition behind the NLP features are the following:
Emotion Sensor Dataset We have introduced the use of emotion sensor features as meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from Twitter and blogs.
Controlled Topical Signals from Empath. Empath can analyse the text across 200 gold standard topics and emotions. Additionally, it uses neural embedding to draw connotation among words across more than 1.8 billion words. We have used only selected categories like violence, hate, anger, aggression, social media and dispute from 200 Empath categories useful for us unlikeBIBREF12 which takes 194 categories.
Emoticons frequently used on social media indicates the sense of sentenceBIBREF17, BIBREF19, BIBREF9.
Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15 POS Tags provide the degree of target aggressiveness. LikeBIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to just identify words related to persons, activities, quality, etc, in the text.
Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral) like used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressiveness with non-aggressiveness in the text.
The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (on English code-mixed Facebook data). This means the performance achieved by our system depends entirely on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state-of-the-art approaches used for aggression identification on English code-mixed TRAC data; while being trained only on Facebook comments, the system also outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.
Related work
There are several works on aggression identification submitted at TRAC 2018; among them, some approaches use an ensemble of multiple statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models, like BIBREF10, BIBREF11, BIBREF12, have used an ensemble of statistical and deep learning models. In these models the statistical part uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticons, etc. Models like BIBREF13 have used an ensemble of deep learning models based on majority voting.
Some other models, like BIBREF28, BIBREF12, BIBREF9, have used different models for Facebook and Twitter, while approaches like BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architectures based on deep learning, and BIBREF7 has proposed a unified statistical model. Systems like BIBREF14, BIBREF10, BIBREF7 have used data augmentation, either through translation or by labelling external data, to make the model generalize across domains.
Among these approaches, BIBREF6 extracted features from the TF-IDF of character n-grams, while BIBREF28 uses an LSTM with pre-trained FastText embeddings. BIBREF15 have used a BiLSTM-based model and an SVM meta-classifier for the Facebook and Twitter test sets, respectively, while BIBREF13 tried an ensemble of CNN, LSTM, and BiLSTM.
Some approaches, like BIBREF12, have used emotion frequency as one of the features, while others use sentiment emotion as a feature BIBREF11. Also, BIBREF17, BIBREF19 have converted emoticons to their descriptions, and BIBREF9 have used the per-class TF-IDF of emoticons as one of the features. Compared to all these approaches, we have concentrated on capturing multiple linguistic/pattern-based relations, key terms and key patterns (with their associations in the text) through a combination of deep learning architectures with model averaging. We have also used NLP features, obtained from psycho-linguistic and basic linguistic features, as additional inputs to our deep learning architecture.
Methodology
In this section, we describe our system architecture for the aggressiveness classifier. In Section SECREF23 we describe the data preprocessing applied to the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of the NLP features. In Sections SECREF30, SECREF34 and SECREF45 we describe the architecture of the different deep learning models, namely Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM, respectively. Finally, in Section SECREF49, we describe the model-averaging based classification model which combines the prediction probabilities from the three deep learning architectures discussed above (see Figure FIGREF22 for a block diagram of the system architecture).
Methodology ::: Data Preprocessing
We ensure that the text is well formatted before applying it to the embedding layer. First, we detect non-English text (which is rare) and translate all of it to English using Google Translate. Still, some code-mixed words like "mc", "bc" and other English abbreviations and spelling errors, such as "nd" in place of "and" or "u" in place of "you", cause the deep learning model to confuse sentences with the same meaning. We follow the preprocessing strategy of BIBREF17 to normalize abbreviations, remove spelling errors, URLs and punctuation marks, and convert emojis to their descriptions.
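A minimal sketch of such a preprocessing step is given below (the slang dictionary contains only example entries, and the translation step is omitted here); it is an illustration, not the exact preprocessor used in this work.

```python
# Minimal text normalisation sketch: drop URLs, describe emojis,
# expand a few example abbreviations and strip punctuation.
import re
import emoji

SLANG_MAP = {"u": "you", "nd": "and", "plz": "please"}   # illustrative entries only
URL_RE = re.compile(r"https?://\S+|www\.\S+")
PUNCT_RE = re.compile(r"[^\w\s]")

def preprocess(text: str) -> str:
    text = URL_RE.sub(" ", text)                          # remove URLs
    text = emoji.demojize(text, delimiters=(" ", " "))    # emojis -> textual description
    tokens = [SLANG_MAP.get(tok.lower(), tok) for tok in text.split()]
    text = PUNCT_RE.sub(" ", " ".join(tokens))            # remove punctuation marks
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("thank u nd ur friends 🙏 check https://example.com !!"))
```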
(POS tagging: https://spacy.io/usage/linguistic-features#pos-tagging)
Methodology ::: NLP Features
We have identified a novel combination of features which is highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available ones. The first is the Emotion Sensor feature, which uses a statistical model to classify words into 7 different classes based on sentences obtained from Twitter and blogs containing 1,185,540 words in total. The second is a collection of selected topical signals extracted from the text using Empath (see Table 1).
This differs from previous approaches BIBREF8, BIBREF12: BIBREF12 have used emotion features in the form of frequencies, while BIBREF8 have used an emotion feature vector obtained from LIWC 2007 BIBREF30. Unlike BIBREF12, we have used only 6 topical signals from Empath BIBREF29. We have borrowed the idea of using other features, like punctuation features and part-of-speech tags, from BIBREF12. Table 1 lists and describes the features, the tools used to obtain them, and the number of features resulting from each type.
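The emotion-sensor and Empath signals could be collected as sketched below; the file name, column layout and exact Empath category keys are assumptions, since Table 1 is not reproduced here.

```python
# Sketch of two feature groups: per-word emotion-sensor probabilities
# (averaged over the comment) and selected Empath topical signals.
import csv
import numpy as np
from empath import Empath

EMPATH_CATEGORIES = ["violence", "hate", "anger", "aggression", "social_media", "dispute"]
EMOTIONS = ["Disgust", "Surprise", "Neutral", "Anger", "Sad", "Happy", "Fear"]
lexicon = Empath()

def load_emotion_sensor(path="emotion_sensor.csv"):
    # assumed format: a "word" column plus one probability column per emotion class
    table = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            table[row["word"].lower()] = np.array([float(row[e]) for e in EMOTIONS])
    return table

def emotion_and_empath_features(text, sensor_table):
    words = text.lower().split()
    hits = [sensor_table[w] for w in words if w in sensor_table]
    emotion_vec = np.mean(hits, axis=0) if hits else np.zeros(len(EMOTIONS))
    scores = lexicon.analyze(text, categories=EMPATH_CATEGORIES, normalize=True)
    empath_vec = np.array([scores.get(c, 0.0) for c in EMPATH_CATEGORIES])
    return np.concatenate([emotion_vec, empath_vec])   # the remaining features are appended similarly
```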
Methodology ::: Deep Pyramid CNN(DPCNN)
CNNs have been proven to be great feature extractors for text classification BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23, while deeper networks (whether RNNs or CNNs) have been shown to learn long-range associations, as in deeper character-level CNNs BIBREF36, BIBREF37 and complex combinations of RNNs and CNNs BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN) BIBREF23 has 15 layers of word-level CNNs and uses a pre-activation similar to that proposed in the improved ResNet BIBREF43. DPCNN outperforms the 32-layer character CNN BIBREF37 and hierarchical attention networks BIBREF42, and has the added advantage that, due to its pyramid structure, it does not require dimension matching in the shortcut connections defined as z + h(z) as in BIBREF43, where h(z) represents the skipped layers, which essentially contain two convolutional layers with pre-activation. It uses an enhanced region embedding which consumes pre-trained embeddings (in our case, the FastText+GloVe based dual embedding).
Enhanced Region Embedding. The current DPCNN BIBREF23 uses a two-view enhanced region embedding. For text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then, using unlabeled data, it trains a neural network with one hidden layer on the artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture is shown in Figure FIGREF29.
Let each word input $x_j \in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$, and let the sentence $s_i$ contain a sequence of $n$ words $\lbrace w_{1},w_{2},\ldots ,w_{n}\rbrace $ as shown in Figure FIGREF29. In comparison to a conventional convolution layer, DPCNN proposes to use pre-activation, so the convolutional layer of DPCNN is essentially $\textbf {W}\sigma (\textbf {x})+\textbf {b}$, where $\textbf {W}$ and $\textbf {b}$ (unique to each layer) are the weight matrix and bias respectively, and we use $\sigma $ as PReLU BIBREF44. During implementation we use a kernel size of 3 ($\textbf {x}$ denotes the small overlapping regions of text) and 128 filters (the number of feature maps, i.e., the number of rows of $\textbf {W}$), as depicted in Figure FIGREF29. Keeping the number of filters the same in each convolution layer and max-pooling with stride 2 halves the computation time and doubles the net coverage of the convolution kernel, so the deeper layers learn long-range associations between regions of text. Let $h_{dpcnn} \in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \in R^{24}$ be the NLP features computed from the text. Let $z_1 \in R^{p_1 + 24}$ be another hidden state obtained as $z_1 = h_{dpcnn} \oplus f_{nlp}$,
where $\oplus $ denotes concatenation. The vector $z_1$ is then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as $y_{i1}^*(k) = \frac{\exp ((W_{dpcnn} z_1 + b_{dpcnn})_k)}{\sum _{j=1}^{K} \exp ((W_{dpcnn} z_1 + b_{dpcnn})_j)}$,
where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively.
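A simplified PyTorch sketch of a DPCNN-style classifier along these lines is shown below. It follows the hyper-parameters stated above (128 filters, kernel size 3, PReLU pre-activation, stride-2 max-pooling, NLP features concatenated before the classifier), but the number of blocks and other details are our own simplifications, not the authors' implementation.

```python
# Simplified DPCNN sketch: region embedding, repeated downsampling blocks with
# pre-activation convolutions and parameter-free shortcuts, then max pooling.
import torch
import torch.nn as nn

class DPCNNBlock(nn.Module):
    """Downsample by 2, then two pre-activation convolutions with a shortcut."""
    def __init__(self, channels=128, kernel_size=3):
        super().__init__()
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
        self.act1, self.act2 = nn.PReLU(), nn.PReLU()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, channels, seq_len)
        x = self.pool(x)
        z = self.conv1(self.act1(x))            # pre-activation: W sigma(x) + b
        z = self.conv2(self.act2(z))
        return x + z                            # shortcut, no dimension matching needed

class DPCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=400, channels=128, n_blocks=6, n_nlp=24, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)            # e.g. GloVe+FastText dual embedding
        self.region = nn.Conv1d(embed_dim, channels, 3, padding=1)  # stands in for the region embedding
        self.blocks = nn.ModuleList([DPCNNBlock(channels) for _ in range(n_blocks)])
        self.fc = nn.Linear(channels + n_nlp, n_classes)

    def forward(self, token_ids, nlp_feats):
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = self.region(x)
        for block in self.blocks:
            x = block(x)
        h_dpcnn = x.max(dim=2).values                # pool the remaining positions
        z1 = torch.cat([h_dpcnn, nlp_feats], dim=1)  # concatenate the NLP features
        return self.fc(z1)                           # softmax is applied inside the loss
```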
Methodology ::: Disconnected RNN(DRNN)
Given a sequence $s_i = [x_{1}, x_{2}, x_{3},\ldots ,x_{n}]$, where $x_{j} \in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of the input text, the sequence is applied to a variant of RNN called Long Short-Term Memory (LSTM) BIBREF45, as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps updating the memory cell with the current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows: $i_t = \sigma (W_i x_t + U_i h_{t-1} + b_i)$, $f_t = \sigma (W_f x_t + U_f h_{t-1} + b_f)$, $o_t = \sigma (W_o x_t + U_o h_{t-1} + b_o)$, $\hat{c}_t = \tanh (W_c x_t + U_c h_{t-1} + b_c)$, $c_t = f_t \odot c_{t-1} + i_t \odot \hat{c}_t$ and $h_t = o_t \odot \tanh (c_t)$,
where $\hat{c}_t$ is the current cell state obtained from the current input $x_t$ and the previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activations corresponding to the input gate, forget gate and output gate respectively, $\sigma $ denotes the logistic sigmoid function and $\odot $ denotes element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors, given as $h_t = \mathrm{LSTM}(x_1, x_2, \ldots , x_t)$.
Specifically, we have used a Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions (forward & backward). The forward LSTM reads the words in their natural order from $x_{1}$ to $x_{n}$ to obtain $\overrightarrow{h_t}$, while the backward LSTM reads from $x_{n}$ to $x_{1}$ to obtain $\overleftarrow{h_t}$. Then $h_t$ is calculated as $h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}$,
where $\oplus $ is the concatenation and $L$ is the size of the one-directional LSTM. Therefore we denote the hidden state of equation DISPLAY_FORM37 with a BiLSTM as $h_t = \mathrm{BiLSTM}(x_1, x_2, \ldots , x_t)$.
To avoid handling long sequences and to capture local information for each word, we define a window size $k$ for each word such that the BiLSTM only sees the previous $k-1$ words together with the current word, where $k$ is a hyperparameter BIBREF24. We use padding <PAD> to make the slices of fixed size $k$ (as shown in Figure FIGREF33). This provides each hidden state $h_t$ with the sequence of the $k$ most recent words. Since a phrase of $k$ words can lie anywhere in the text, this helps to model a position-invariant phrase representation, allowing the model to identify key phrases important for a particular category. In this case, the equation of $h_t$ is given as $h_t = \mathrm{BiLSTM}(x_{t-k+1}, \ldots , x_t)$.
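For illustration, the fixed-size slices can be produced as follows (k = 3 in the toy example; this helper is ours, not taken from the original implementation):

```python
# Build the k-sized window ending at every position, left-padded with "<PAD>".
def disconnect_windows(tokens, k=8, pad="<PAD>"):
    padded = [pad] * (k - 1) + list(tokens)
    return [padded[t:t + k] for t in range(len(tokens))]

print(disconnect_windows(["stop", "playing", "with", "emotions"], k=3))
# [['<PAD>', '<PAD>', 'stop'], ['<PAD>', 'stop', 'playing'],
#  ['stop', 'playing', 'with'], ['playing', 'with', 'emotions']]
```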
The output hidden vectors $H = [h_1, h_2, h_3, \ldots , h_n] \in R^{n \times 2L}$ are converted to a fixed-length vector $h_{drnn} \in R^{2L}$ with max pooling over time: $h_{drnn} = \max _{1 \le t \le n} h_t$ (element-wise maximum).
Let $f_{nlp} \in R^{24}$ be the NLP features computed from the text, and let $z_2 \in R^{2L + 24}$ be another hidden state obtained as $z_2 = h_{drnn} \oplus f_{nlp}$,
where $\oplus $ denotes concatenation. The vector $z_2$ is then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as $y_{i2}^*(k) = \frac{\exp ((W_{drnn} z_2 + b_{drnn})_k)}{\sum _{j=1}^{K} \exp ((W_{drnn} z_2 + b_{drnn})_j)}$,
where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias.
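An illustrative PyTorch sketch of the Disconnected RNN just described is given below (window size k, BiLSTM, max-pooling over positions, NLP features concatenated before the softmax layer); it is a sketch under our own simplifications, not the authors' code.

```python
# Disconnected RNN sketch: run a BiLSTM over each k-word window, keep the
# state at the window's last word, max-pool over positions, add NLP features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisconnectedRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=400, hidden=128, k=8, n_nlp=24, n_classes=3, pad_id=0):
        super().__init__()
        self.k, self.pad_id = k, pad_id
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=pad_id)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden + n_nlp, n_classes)

    def forward(self, token_ids, nlp_feats):              # token_ids: (batch, n)
        batch, n = token_ids.shape
        padded = F.pad(token_ids, (self.k - 1, 0), value=self.pad_id)
        windows = padded.unfold(1, self.k, 1)              # (batch, n, k)
        x = self.embed(windows.reshape(batch * n, self.k)) # (batch*n, k, embed_dim)
        out, _ = self.bilstm(x)                            # (batch*n, k, 2*hidden)
        h_t = out[:, -1, :].reshape(batch, n, -1)          # state at each window's end
        h_drnn = h_t.max(dim=1).values                     # max pooling over time
        z2 = torch.cat([h_drnn, nlp_feats], dim=1)
        return self.fc(z2)
```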
Methodology ::: Pooled BiLSTM
The architecture is shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, \ldots , x_{n}]$, where $x_j \in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after the BiLSTM is given as $h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}$.
To avoid the loss of information caused by modelling the entire sequence, we concatenate the last hidden state with the max-pooled ($c_{max}$) and mean-pooled ($c_{mean}$) representations of the hidden states calculated over all time steps BIBREF25. We also concatenate the NLP features $f_{nlp} \in R^{24}$; the final feature vector $z_{3}$ is given as $z_3 = h_n \oplus c_{max} \oplus c_{mean} \oplus f_{nlp}$,
where $\oplus $ denotes concatenation. The final feature vector $z_3$ is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as $y_{i3}^*(k) = \frac{\exp ((W_{bilstm} z_3 + b_{bilstm})_k)}{\sum _{j=1}^{K} \exp ((W_{bilstm} z_3 + b_{bilstm})_j)}$,
where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively.
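A corresponding PyTorch sketch (again illustrative rather than the authors' implementation):

```python
# Pooled BiLSTM sketch: concatenate the last hidden state with the max- and
# mean-pooled BiLSTM states and the NLP features before the classifier.
import torch
import torch.nn as nn

class PooledBiLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=400, hidden=256, n_nlp=24, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(3 * 2 * hidden + n_nlp, n_classes)   # last + max + mean states

    def forward(self, token_ids, nlp_feats):           # token_ids: (batch, seq_len)
        out, _ = self.bilstm(self.embed(token_ids))    # (batch, seq_len, 2*hidden)
        h_last = out[:, -1, :]
        c_max = out.max(dim=1).values
        c_mean = out.mean(dim=1)
        z3 = torch.cat([h_last, c_max, c_mean, nlp_feats], dim=1)
        return self.fc(z3)
```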
Methodology ::: Classification Model
According to the deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging is a reasonable ensemble for similar base learners of comparable performance. Similar to the discussion in BIBREF21, we compute the (unweighted) model average by combining the softmax probabilities of the three classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43 and DISPLAY_FORM48. The averaged class probabilities are computed as $y_i^* = \frac{1}{3}\left(y_{i1}^* + y_{i2}^* + y_{i3}^*\right)$ and $\hat{y_i} = \arg \max _{k \in \lbrace 1, \ldots , K\rbrace } y_i^*(k)$,
where K is the number of classes, and $\hat{y_i}$ is the predicted label for sentence $s_i$.
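A sketch of this averaging step is given below; `dpcnn`, `drnn` and `pooled_bilstm` are assumed to be trained instances of the three (sketched) models, and the inputs are assumed to be already preprocessed.

```python
# Unweighted averaging of the three classifiers' softmax probabilities.
import torch

@torch.no_grad()
def predict(models, token_ids, nlp_feats):
    probs = [torch.softmax(m(token_ids, nlp_feats), dim=1) for m in models]
    avg = torch.stack(probs).mean(dim=0)      # averaged class probabilities
    return avg.argmax(dim=1), avg             # predicted labels and probabilities

# labels, probabilities = predict([dpcnn, drnn, pooled_bilstm], token_ids, nlp_feats)
```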
Experiment and Evaluation ::: Dataset Description
We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.
TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels: (a) Non-Aggressive (NAG), (b) Overtly Aggressive (OAG) and (c) Covertly Aggressive (CAG). The distribution of the training, validation and test sets is described in Table TABREF56.
Kaggle Dataset: This dataset contains 20,001 manually labeled tweets. The labels are divided into two categories, AGG (Aggressive) and NAG (Non-Aggressive), indicating the presence or absence of aggression in a tweet. We have used the same test split available in the baseline code. The distribution of the training and test sets is given in Table TABREF56.
Experiment and Evaluation ::: Experimental Setup
We have used GloVe embeddings BIBREF49 concatenated with FastText embeddings BIBREF20 in all three classification models presented in this paper. Specifically, we used GloVe vectors pre-trained on a Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries, where each word is represented by a 100-dimensional vector; in the case of FastText, each word is represented by a 300-dimensional vector. We have applied spatial dropout BIBREF50 of 0.3 at the embedding layer for DPCNN (Section SECREF30) and Pooled BiLSTM (Section SECREF45). For the DPCNN model (Section SECREF30) we implicitly learnt a 128-dimensional unsupervised embedding for task-specific representation, as in BIBREF23. Additionally, for DPCNN all convolutional layers used 128 filters, a kernel size of 3 and max-pooling with stride 2, and we used kernel and bias regularizers of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) BIBREF44, while the activation at each convolutional kernel is linear. For DRNN (Section SECREF34) we used a window size of 8, with the remaining LSTM parameters the same as in BIBREF24. For Pooled BiLSTM (Section SECREF45) we used an LSTM hidden size of 256. The maximum sequence length is 200 in all three models. In each classification model, the classification layer is a fully connected layer with softmax activation whose output size equals the number of classes: 3 for the TRAC 2018 dataset and 2 for the Kaggle dataset. Training was done using the ADAM optimizer BIBREF51 for DPCNN and RMSProp BIBREF52 for the DRNN and Pooled BiLSTM models. All models are trained end-to-end using softmax cross-entropy loss BIBREF53 for the TRAC 2018 dataset and binary cross-entropy loss BIBREF53 for the Kaggle dataset.
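For illustration, the 400-dimensional dual embedding matrix can be assembled as follows (the `glove` and `fasttext` objects are assumed to be word-to-vector dictionaries loaded beforehand; this is a sketch, not the exact code used):

```python
# Build an embedding matrix whose rows are [GloVe (100d) ; FastText (300d)].
import numpy as np

def build_embedding_matrix(word_index, glove, fasttext, d_glove=100, d_fasttext=300):
    matrix = np.zeros((len(word_index) + 1, d_glove + d_fasttext), dtype=np.float32)
    for word, idx in word_index.items():
        g = glove.get(word, np.zeros(d_glove, dtype=np.float32))
        f = fasttext.get(word, np.zeros(d_fasttext, dtype=np.float32))  # FastText can also back off to subwords
        matrix[idx] = np.concatenate([g, f])
    return matrix
```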
To train our models on the TRAC 2018 dataset, we merged the training and validation sets and then held out a 10% split from the shuffled data to select the best model, for all classifiers. For the Kaggle dataset we have used only 20 NLP features (omitting the TF-IDF emoticon feature and the punctuation feature given in Table TABREF25, as these are not present in the Kaggle data).
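The held-out split can be obtained, for example, as follows (toy placeholder data; in practice the texts and labels come from the merged TRAC training and validation files):

```python
# 10% shuffled hold-out split used for model selection.
from sklearn.model_selection import train_test_split

texts = [f"comment {i}" for i in range(100)]     # placeholder comments
labels = ["NAG", "CAG", "OAG", "NAG"] * 25       # placeholder labels

X_train, X_dev, y_train, y_dev = train_test_split(
    texts, labels, test_size=0.10, shuffle=True, random_state=42)
print(len(X_train), len(X_dev))                  # 90 10
```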
Experiment and Evaluation ::: Evaluation Strategy
To compare our experimental results, we have used the top-5 systems from the published results of TRAC-2018 BIBREF5. To compare our results on the Kaggle dataset, we have used the latest and best published result on the Kaggle website as a baseline. We have conducted separate experiments to properly investigate (a) the performance of each of the classifiers used in our model-averaging based system, (b) the impact of the NLP features on each of these classifiers, and finally (c) the performance of our proposed system. In Tables TABREF57, TABREF57 and TABREF57, the models named DPCNN (ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM (ref SECREF45) are the corresponding models without NLP features. Similarly, DPCNN + NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are the corresponding models with NLP features. Model Averaging (A+B+C) is the ensemble of the three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method represents the model averaging of the three models with NLP features.
Experiment and Evaluation ::: Results and Discussion
In this paper, we evaluate our models using the weighted macro-averaged F-score, defined as in BIBREF5, BIBREF2: it weights the F-score computed per class by the class composition of the test set and then averages these per-class F-scores to obtain the final F-score. Tables TABREF57, TABREF57 and TABREF57 present the comparative experimental results for the method proposed in this paper with respect to the state of the art. The top-5 models BIBREF5 given in Tables TABREF57 and TABREF57 are the best performing models on the Facebook and Twitter test sets of TRAC 2018, respectively. We have followed all the experimental guidelines discussed in the TRAC contest guideline papers BIBREF2, BIBREF5. From the results given in Tables TABREF57, TABREF57 and TABREF57 it is clear that our proposed model shows the best performance among all the approaches. These results also show that each deep learning architecture with NLP features performs better than the corresponding architecture without them, i.e., the NLP features add some value to the architectures, even if the gain is not very large.
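This metric corresponds to the weighted-average F1 of scikit-learn, e.g.:

```python
# Weighted macro-averaged F-score: per-class F1 weighted by class support.
from sklearn.metrics import f1_score

def weighted_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="weighted")

print(weighted_f1(["NAG", "OAG", "CAG", "NAG"], ["NAG", "CAG", "CAG", "NAG"]))
```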
Conclusion and Future Work
In this paper, we have briefly described the approach we have taken to solve aggression identification on online social media texts, which is very challenging since the dataset is noisy and code-mixed. We presented an ensemble of deep learning models which outperforms previous approaches by a sufficient margin while having the ability to generalize across domains.
In future work, we will explore other methods to improve the understanding of deep learning models of group-targeted text; although the categories are well defined, we will examine whether they can be further fine-tuned with more data. We also plan to work towards a generalized language model for code-mixed texts which can handle Hindi code-mixed and other multilingual code-mixed datasets (i.e., to reduce the dependence on language-specific code-mixed resources). | Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features
54fe8f05595f2d1d4a4fd77f4562eac519711fa6 | 54fe8f05595f2d1d4a4fd77f4562eac519711fa6_0 | Q: How have the differences in communication styles between Twitter and Facebook increased the complexity of the problem?
| Systems do not perform well both in Facebook and Twitter texts
61404466cf86a21f0c1783ce535eb39a01528ce8 | 61404466cf86a21f0c1783ce535eb39a01528ce8_0 | Q: What are the key differences in communication styles between Twitter and Facebook?
Text: Introduction
The exponential increase of interactions on social media platforms like Facebook and Twitter has generated a huge amount of data. These interactions have had not only positive but also negative effects on billions of people, owing to the large number of aggressive comments (expressing hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicide BIBREF1. In this paper we concentrate on problems related to aggressiveness.
The fine-grained definition of aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified aggressiveness into three labels: Overtly Aggressive (OAG), Covertly Aggressive (CAG) and Non-Aggressive (NAG). Each of the three labels is described as follows:
Overtly Aggressive (OAG) - This type of aggression is a direct verbal attack pointing at a particular individual or group. For example, "Well said sonu..you have courage to stand against dadagiri of Muslims".
Covertly Aggressive (CAG) - In this type of aggression the attack is not direct but hidden, subtle and more indirect, while being stated politely most of the time. For example, "Dear India, stop playing with the emotions of your people for votes."
Non-Aggressive (NAG) - Generally this type of text lacks any kind of aggression; it is basically used to state facts, send wishes on occasions, and be polite and supportive.
An additional discussion of the aggressiveness task can be found in the Kaggle task, which divides it into just two classes, i.e., presence or absence of aggression in tweets.
The informal setting of social media often encourages multilingual speakers to switch back and forth between languages when speaking or writing, which results in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systems BIBREF3. This language interchange makes the grammar more complex, and thus it becomes hard to handle with traditional algorithms. The presence of a high percentage of code-mixed content in social media text has therefore increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.
The massive increase in social media data has rendered manual methods of content moderation difficult and costly. Machine learning and deep learning methods to identify such phenomena have therefore attracted growing attention in the research community in recent years BIBREF4.
Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).
Many of the previous approaches BIBREF5 have used an ensemble model for the task: some use an ensemble of statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9, some an ensemble of statistical and deep learning models BIBREF10, BIBREF11, BIBREF12, and some an ensemble of deep learning models BIBREF13. Other approaches propose a unified architecture based on deep learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, while some propose a unified statistical model BIBREF7. Additionally, some approaches use data augmentation, either through translation or by labeling external data, to make the model generalize across domains BIBREF14, BIBREF10, BIBREF7.
Most of the above-discussed systems show high performance on either (a) the Twitter dataset or (b) the Facebook dataset (given in TRAC-2018), but not on both English code-mixed datasets. This may be due to the differing text styles or levels of complexity of the two datasets. We therefore concentrated on developing a robust system for English code-mixed and uni-lingual texts that can also handle different writing styles. Our approach is based on three main ideas:
Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.
Exploiting psycho-linguistic features together with basic linguistic features as meta-data. The main aim is to minimize direct dependence on the in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticon and punctuation features. We use the term "NLP Features" to refer to this feature set throughout the paper.
Dual embedding based on FastText and GloVe. This dual embedding provides high vocabulary coverage and helps to capture rare and partially incorrect words in the text (especially through FastText BIBREF20).
Our "Deep-text architecture" uses a model averaging strategy over three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that use multiple models for the same problem and combine their predictions to produce more reliable and consistent predictions BIBREF21. It is the simplest form of weighted-average ensemble prediction BIBREF22, in which each ensemble member contributes equally to the prediction. Specifically, in our case, three different models have been used. The following contains the intuition behind the selection of these three models:
Deep Pyramid CNN BIBREF23, being deeper, helps to learn long-range associations between temporal regions of text using two-view embeddings.
Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.
Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.
NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.
The intuition behind the NLP features are the following:
Emotion Sensor Dataset We have introduced to use of emotion sensor features, as a meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from twitter and blogs.
Controlled Topical Signals from Empath. Empath can analyse the text across 200 gold standard topics and emotions. Additionally, it uses neural embedding to draw connotation among words across more than 1.8 billion words. We have used only selected categories like violence, hate, anger, aggression, social media and dispute from 200 Empath categories useful for us unlikeBIBREF12 which takes 194 categories.
Emoticons frequently used on social media indicates the sense of sentenceBIBREF17, BIBREF19, BIBREF9.
Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15 POS Tags provide the degree of target aggressiveness. LikeBIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to just identify words related to persons, activities, quality, etc, in the text.
Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral) like used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressiveness with non-aggressiveness in the text.
The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.
Related work
There are several works for aggression identification submitted at TRAC 2018 among them some approaches use the ensemble of multiple statistical modelsBIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models likeBIBREF10, BIBREF11, BIBREF12 have used ensemble of statistical and deep learning models. In these models the statistical part of the model uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticon etc. Model like: BIBREF13 has used the ensemble of deep learning models based on majority voting.
Some other models like: BIBREF28, BIBREF12, BIBREF9 have used different models for Facebook and twitter. While approaches like:BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architecture based on deep learning. Systems likeBIBREF14, BIBREF10, BIBREF7 have used data augmentation either through translation or labelling external data to make the model generalize across domains. While BIBREF7 has proposed a unified statistical model.
Among approaches likeBIBREF6 extracted features from TF-IDF of character n-grams whileBIBREF28 uses LSTM with pre-trained embeddings from FastText. BIBREF15 have used the BiLSTM based model and the SVM metaclassifier model for the Facebook and Twitter test sets, respectively. While BIBREF13 tried ensembling of CNN, LSTM, and BILSTM.
Some approaches like:BIBREF12 has used emotions frequency as one of the features, while some others use sentiment emotion as featureBIBREF11. Also,BIBREF17, BIBREF19 have converted emoticons to their description. BIBREF9 have used TF-IDF of emoticons per-class as one of the features. Compared to all these approaches, we have concentrated to capture multiple linguistic/pattern based relations, key-terms and key-patters (with their association in text) through a combination of deep learning architectures with model averaging. We have also used NLP features as additional features with our deep learning architecture, obtained from psycho-linguistic and basic linguistic features.
Methodology
In this section, we describe our system architecture for the aggressiveness classifier. In Section SECREF23 we describe the data preprocessing applied to the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we describe the architecture of the different deep learning models, i.e., Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM, respectively. Finally, in Section SECREF49, we describe the model averaging based classification model which combines the prediction probabilities from the three deep learning architectures discussed above (see Figure FIGREF22 for a block diagram of the system architecture).
Methodology ::: Data Preprocessing
We consider the text to be well formatted before applying it to the embedding layer. First, we detect non-English text (of which there is little) and translate it to English using Google Translate. Still, some code-mixed words like "mc", "bc", other English abbreviations and spelling errors like "nd" in place of "and" or "u" in place of "you" cause a deep learning model to confuse sentences of the same meaning. We follow the preprocessing strategy of BIBREF17 to normalize the abbreviations and to remove spelling errors, URLs and punctuation marks, converting emojis to their description.
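The following is an illustrative version of such a preprocessing step; the abbreviation map is a tiny stand-in for the normalization lexicon actually used in the paper, and language detection/translation is left out of the sketch.

# Illustrative preprocessing pipeline (assumed details, not the authors' exact code).
import re
import string
import emoji

ABBREVIATIONS = {"u": "you", "nd": "and", "plz": "please"}  # assumed examples

def preprocess(text: str) -> str:
    text = emoji.demojize(text, delimiters=(" ", " "))   # emojis -> description
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)    # drop URLs
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [ABBREVIATIONS.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(tokens)

print(preprocess("plz check https://example.com nd tell me u agree"))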
https://spacy.io/usage/linguistic-features#pos-tagging
Methodology ::: NLP Features
We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature, which uses a statistical model to classify words into 7 different classes based on sentences obtained from Twitter and blogs containing a total of 1,185,540 words. The second one is a collection of selected topical signals from text, collected using Empath (see Table 1).
Our features differ from previous approaches BIBREF8, BIBREF12: BIBREF12 have used emotion features in the form of frequencies, while BIBREF8 have used an emotion feature vector obtained from LIWC 2007 BIBREF30. Unlike BIBREF12, we have used only 6 topical signals from Empath BIBREF29. We have borrowed the idea of using other features like punctuation features and parts-of-speech tags from BIBREF12. Table 1 lists and describes the features, the tools used to obtain them and the number of features resulting from each type.
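As a rough sketch of the Empath-based topical signals, the snippet below queries Empath for the six categories named in the paper; we assume these names match Empath's built-in categories, and the example text is ours.

# Sketch of the six Empath topical signals used as NLP features.
from empath import Empath

EMPATH_CATEGORIES = ["violence", "hate", "anger", "aggression", "social_media", "dispute"]
lexicon = Empath()

def empath_features(text: str) -> list:
    scores = lexicon.analyze(text, categories=EMPATH_CATEGORIES, normalize=True) or {}
    return [scores.get(cat, 0.0) for cat in EMPATH_CATEGORIES]

print(empath_features("They keep threatening and abusing people online."))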
Methodology ::: Deep Pyramid CNN(DPCNN)
CNNs have been proven to be great feature extractors for text classification BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23, while deeper networks (whether RNNs or CNNs) have been proven to learn long-range associations, as in deeper character-level CNNs BIBREF36, BIBREF37 and complex combinations of RNNs and CNNs BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN) BIBREF23 has 15 layers of word-level CNNs and contains pre-activation similar to that proposed in the improved Resnet BIBREF43. DPCNN outperforms the 32-layer character CNN BIBREF37 and Hierarchical Attention Networks BIBREF42, and it has the added advantage that, due to its pyramid structure, it does not require dimension matching in the shortcut connections defined as z + h(z) as in BIBREF43, where h(z) represents the skipped layers, essentially containing two convolutional layers with pre-activation. It uses an enhanced region embedding which consumes pre-trained embeddings (in our case, a FastText+Glove based dual embedding).
Enhanced Region Embedding. The current DPCNNBIBREF23, uses two view type enhanced region embedding. For the text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, it trains a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture has been shown in Figure FIGREF29.
Let each word input $x_j \in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$, and let the sentence $s_i$ contain a sequence of $n$ words $\lbrace w_{1},w_{2},w_{3},......,w_{n}\rbrace $ as shown in Figure FIGREF29. In comparison to a conventional convolution layer, DPCNN proposes to use pre-activation, thus essentially the convolutional layer of DPCNN is $\textbf {W}\sigma (\textbf {x})+\textbf {b}$, where $\textbf {W}$ and $\textbf {b}$ (unique to each layer) are the weight matrix and bias respectively, and we use $\sigma $ as PReLU BIBREF44. During implementation we use a kernel size of 3 (represented by $\textbf {x}$ to denote the small overlapping regions of text). The number of filters (the number of feature maps, denoted by the number of rows of $\textbf {W}$) is 128 as depicted in Figure FIGREF29. With the number of filters kept the same in each convolution layer, max-pooling with stride 2 halves the computation time and doubles the net coverage of the convolution kernel. Thus the deeper layers learn long-range associations between regions of text. Let $h_{dpcnn} \in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \in R^{24}$ be the NLP features computed from the text. Let $z_1 \in R^{p_1 + 24}$ be another hidden state obtained as
where $\oplus $ denotes concatenation. The obtained vector $z_1$ is then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively.
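A compact Keras sketch of a DPCNN-style branch with the NLP features concatenated before the softmax layer is given below. The filter count, kernel size and pooling stride follow the paper (128, 3, stride 2), but the number of repeated blocks, the region-embedding layer and the final global pooling are simplifications and assumptions on our side, not the authors' exact implementation.

# Minimal DPCNN-like branch (sketch, not the original code).
from tensorflow.keras import layers, Model

def conv_block(x):
    # Pre-activation convolution: W * sigma(x) + b, with PReLU as sigma.
    h = layers.PReLU(shared_axes=[1])(x)
    return layers.Conv1D(128, 3, padding="same")(h)

def build_dpcnn(vocab_size, embed_dim=400, max_len=200, n_nlp=24, n_classes=3, n_blocks=6):
    # embed_dim=400 stands for the concatenated 100-d GloVe + 300-d FastText vectors.
    words = layers.Input(shape=(max_len,), name="words")
    nlp = layers.Input(shape=(n_nlp,), name="nlp_features")

    x = layers.Embedding(vocab_size, embed_dim)(words)
    x = layers.SpatialDropout1D(0.3)(x)
    x = layers.Conv1D(128, 3, padding="same")(x)        # region embedding (simplified)

    for _ in range(n_blocks):
        shortcut = x
        x = conv_block(x)
        x = conv_block(x)
        x = layers.Add()([x, shortcut])                  # z + h(z), no dimension matching needed
        x = layers.MaxPooling1D(pool_size=3, strides=2, padding="same")(x)

    h_dpcnn = layers.GlobalMaxPooling1D()(x)
    z1 = layers.Concatenate()([h_dpcnn, nlp])            # concatenate the 24 NLP features
    out = layers.Dense(n_classes, activation="softmax")(z1)
    return Model([words, nlp], out)

model = build_dpcnn(vocab_size=50000)
model.summary()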
Methodology ::: Disconnected RNN(DRNN)
Given a sequence $s_i = [x_{1}, x_{2}, x_{3},....x_{n}]$, where $x_{j} \in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of the input text, the sequence is applied to a variant of RNN called Long Short-Term Memory (LSTM) BIBREF45, as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps updating the memory cell with the current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows:
where $\hat{c}_t$ is the current cell state obtained from current input $x_t$ and previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activation corresponding to input gate, forget gate and output gate respectively, $\sigma $ denotes the logistic sigmoid function and $\odot $ denotes the element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors given as
Specifically, we have used a Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions (forward & backward). The forward LSTM takes the natural order of words from $x_{1}$ to $x_{n}$ to obtain $\overrightarrow{h_t}$, while the backward LSTM takes $x_{n}$ to $x_{1}$ to obtain $\overleftarrow{h_t}$. Then $h_t$ is calculated as
where $\oplus $ is the concatenation and $L$ is the size for one-directional LSTM. Therefore we denote the hidden state in equation DISPLAY_FORM37 with BiLSTM as
To avoid handling long sequences and to capture local information for each word, we define a window size $k$ for each word such that the BiLSTM only sees the previous $k-1$ words together with the current word, where $k$ is a hyperparameter BIBREF24. We use padding <PAD> to make the slices of fixed size k (as shown in Figure FIGREF33). It provides each hidden state $h_t$ with a sequence of $k$ previous words. Since a phrase of $k$ words can lie anywhere in the text, this helps to model a position-invariant phrase representation, which allows the model to identify key phrases important for a particular category. In this case, the equation of $h_t$ is given as
The output hidden vectors, $H = [h_1, h_2, h_3, ...... h_n] \in R^{n \times 2L}$ are converted to fixed-length vector $h_{drnn} \in R^{2L}$ with max pooling over time:
Let $f_{nlp} \in R^{24}$ be the NLP features computed from the text, and let $z_2 \in R^{2L + 24}$ be another hidden state obtained as
where $\oplus $ denotes concatenation. The obtained vector $z_2$ is then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias.
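The windowed ("disconnected") BiLSTM can be sketched as follows. Padding with zero vectors stands in for the <PAD> token, framing the sequence into overlapping windows of size $k$ is only one possible implementation (not necessarily the authors'), and the LSTM hidden size is an assumption since the paper reuses the settings of BIBREF24.

# Sketch of a Disconnected RNN branch with max pooling over time.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_drnn(vocab_size, embed_dim=400, max_len=200, k=8, lstm_units=128, n_nlp=24, n_classes=3):
    words = layers.Input(shape=(max_len,), name="words")
    nlp = layers.Input(shape=(n_nlp,), name="nlp_features")

    x = layers.Embedding(vocab_size, embed_dim)(words)
    # Left-pad with k-1 zero vectors, then take, for every position t,
    # the window of the previous k-1 words plus the current word.
    x = layers.ZeroPadding1D(padding=(k - 1, 0))(x)
    windows = layers.Lambda(
        lambda e: tf.signal.frame(e, frame_length=k, frame_step=1, axis=1)
    )(x)  # shape: (batch, max_len, k, embed_dim)

    # A shared BiLSTM is run over each k-word window independently.
    bilstm = layers.Bidirectional(layers.LSTM(lstm_units))
    h = layers.TimeDistributed(bilstm)(windows)          # (batch, max_len, 2*lstm_units)

    h_drnn = layers.GlobalMaxPooling1D()(h)              # max pooling over time
    z2 = layers.Concatenate()([h_drnn, nlp])
    out = layers.Dense(n_classes, activation="softmax")(z2)
    return Model([words, nlp], out)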
Methodology ::: Pooled BiLSTM
The architecture has been shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, x_{3}, ..... x_{j}]$, where $x_j \in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after BiLSTM is given as
To avoid the loss of information caused by modelling the entire sequence, we have concatenated the max-pooled ($c_{max}$) and mean-pooled ($c_{mean}$) representations of the hidden states calculated over all time steps BIBREF25. We have also concatenated the NLP features $f_{nlp} \in R^{24}$; the final feature vector $z_{3}$ is given as
where $\oplus $ denotes concatenation. The final feature vector $z_3$ is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively.
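A minimal sketch of the Pooled BiLSTM branch is shown below; the hidden size (256) and the spatial dropout (0.3) follow the experimental setup reported later, while the rest is assumed.

# Sketch of the Pooled BiLSTM branch with NLP features.
from tensorflow.keras import layers, Model

def build_pooled_bilstm(vocab_size, embed_dim=400, max_len=200, lstm_units=256, n_nlp=24, n_classes=3):
    words = layers.Input(shape=(max_len,), name="words")
    nlp = layers.Input(shape=(n_nlp,), name="nlp_features")

    x = layers.Embedding(vocab_size, embed_dim)(words)
    x = layers.SpatialDropout1D(0.3)(x)
    h = layers.Bidirectional(layers.LSTM(lstm_units, return_sequences=True))(x)

    c_max = layers.GlobalMaxPooling1D()(h)        # max-pooled representation
    c_mean = layers.GlobalAveragePooling1D()(h)   # mean-pooled representation
    z3 = layers.Concatenate()([c_max, c_mean, nlp])
    out = layers.Dense(n_classes, activation="softmax")(z3)
    return Model([words, nlp], out)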
Methodology ::: Classification Model
According to deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging might be a reasonable ensemble for similar base learners of comparable performance. Now, similar to the information discussed in BIBREF21, we can compute the model averaging (unweighted) by combining the softmax probabilities of three different classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43, DISPLAY_FORM48. The averaged class probabilities are computed as:
where K is the number of classes, and $\hat{y_i}$ is the predicted label for sentence $s_i$.
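The unweighted model averaging reduces to averaging the three softmax outputs and taking the argmax, e.g. as in the following sketch (the example probabilities are made up):

# Unweighted model averaging of the three classifiers' softmax outputs.
import numpy as np

def average_predictions(p_dpcnn, p_drnn, p_bilstm):
    """Average the per-class probabilities of the three models and predict labels."""
    p_avg = (np.asarray(p_dpcnn) + np.asarray(p_drnn) + np.asarray(p_bilstm)) / 3.0
    return p_avg, p_avg.argmax(axis=-1)

probs, labels = average_predictions([[0.2, 0.5, 0.3]], [[0.1, 0.7, 0.2]], [[0.3, 0.4, 0.3]])
print(probs, labels)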
Experiment and Evaluation ::: Dataset Description
We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.
TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels, (a) Non-Aggressive(NAG), (b) Overtly-Aggressive (OAG) and (c) Covertly-Aggressive(CAG). The distribution of training, validation and test sets are described in Table TABREF56.
Kaggle Dataset: This dataset contains 20001 tweets which are manually labeled. The labels are divided into two categories (indicating the presence or absence of aggression in tweets): AGG (Aggressive) or NAG (Non-Aggressive). We have used the same test split available in the baseline code. The distribution of the training and test sets is given in Table TABREF56.
Experiment and Evaluation ::: Experimental Setup
We have used Glove embeddings BIBREF49 concatenated with FastText embeddings BIBREF20 in all three classification models presented in this paper. Specifically, we used Glove pre-trained vectors obtained from a Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries, where each word is represented using a 100-dimensional vector. In the case of FastText the word is represented using a 300-dimensional vector. Also, we have applied spatial dropout BIBREF50 of 0.3 at the embedding layer for DPCNN (in Section SECREF30) and Pooled BiLSTM (in Section SECREF45). For the DPCNN model (in SECREF30) we have learnt a 128-dimensional vector representation for unsupervised embeddings implicitly for task-specific representation as in BIBREF23. Additionally, for DPCNN all the convolutional layers used 128 filters, a kernel size of 3 and max-pooling with stride 2. In the case of DPCNN we have also used a kernel and bias regularizer of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) proposed in BIBREF44, while the activation at each of the convolutional kernels is linear. For DRNN (in Section SECREF34) we have used a window size of 8, and the rest of the parameters related to the LSTM units are the same as given in BIBREF24. For Pooled BiLSTM (in Section SECREF45) we have used an LSTM hidden unit size of 256. The maximum sequence length is 200 in all three models. In each of the classification models the classification layer contains a fully connected layer with softmax activation, with an output size of 3 (equal to the number of classes) in the case of the TRAC 2018 dataset and 2 in the case of the Kaggle dataset. Training has been done using the ADAM optimizer BIBREF51 for DPCNN and RMSPROP BIBREF52 for the DRNN and Pooled Bi-LSTM models. All the models are trained end-to-end using softmax cross entropy loss BIBREF53 for the TRAC 2018 dataset and binary cross entropy loss BIBREF53 for the Kaggle dataset.
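The dual embedding can be realized by concatenating the two pre-trained vectors per word into a single embedding matrix, as in the sketch below; the helper and variable names are ours, and out-of-vocabulary entries are simply left as zeros.

# Sketch: build a (vocab_size+1) x 400 matrix from 100-d GloVe and 300-d FastText vectors.
import numpy as np

def build_dual_embedding(word_index, glove_vecs, fasttext_vecs, d_glove=100, d_fasttext=300):
    matrix = np.zeros((len(word_index) + 1, d_glove + d_fasttext), dtype="float32")
    for word, idx in word_index.items():
        if word in glove_vecs:
            matrix[idx, :d_glove] = glove_vecs[word]
        if word in fasttext_vecs:
            matrix[idx, d_glove:] = fasttext_vecs[word]
    return matrix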
To train our model on the TRAC 2018 dataset, we merged the training and validation datasets and then used a 10% split from the shuffled dataset to save the best model, for all classifiers. We have used only 20 NLP features (excluding the TF-IDF emoticon feature and the punctuation feature given in Table TABREF25) for the Kaggle dataset, as these are not present in the Kaggle dataset.
Experiment and Evaluation ::: Evaluation Strategy
To compare our experimental results we have used the top-5 systems from the published results of TRAC-2018 BIBREF5. To compare our results on the Kaggle dataset, we have used the last and best published result on the Kaggle website as a baseline. We have conducted separate experiments to properly investigate (a) the performance of each of the classifiers (used in our model averaging based system), (b) the impact of the NLP features on each of these classifiers and, finally, (c) the performance of our proposed system. In Tables TABREF57, TABREF57 and TABREF57, the models named DPCNN (ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM (ref SECREF45) are the corresponding models without NLP features. Similarly, DPCNN + NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are the corresponding models with NLP features. Model Averaging (A+B+C) is the ensemble of the three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method represents the model averaging of the three models with NLP features.
Experiment and Evaluation ::: Results and Discussion
In this paper, we have evaluated our model using the weighted macro-averaged F-score (see BIBREF5, BIBREF2). This measure weights the F-score computed per class based on the class composition in the test set and then takes the average of these per-class F-scores to give the final F-score. Tables TABREF57, TABREF57 and TABREF57 present the comparative experimental results for the method proposed in this paper with respect to the state-of-the-art. The top 5 models BIBREF5 given in Tables TABREF57 and TABREF57 are the best performing models for the Facebook and Twitter test datasets, respectively, on TRAC 2018. We have followed all the experimental guidelines discussed in the TRAC contest guideline papers BIBREF2, BIBREF5. From the results given in Tables TABREF57, TABREF57 and TABREF57 it is clear that our proposed model shows the best performance among all of the approaches. These results also show that all the deep learning architectures with NLP features perform better than the corresponding individual deep learning architectures. This means the NLP features add some value to the architectures, even if it is not very high.
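For reference, the weighted macro-averaged F-score corresponds to scikit-learn's "weighted" average; the labels below are made up purely for illustration.

# Weighted (class-frequency weighted) macro-averaged F-score.
from sklearn.metrics import f1_score

y_true = ["NAG", "OAG", "CAG", "NAG", "CAG", "NAG"]
y_pred = ["NAG", "OAG", "NAG", "NAG", "CAG", "OAG"]
print(f1_score(y_true, y_pred, average="weighted"))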
Conclusion and Future Work
In this paper, we have briefly described the approach we have taken to solve the problem of aggression identification on online social media texts, which is very challenging since the dataset is noisy and code-mixed. We presented an ensemble of deep learning models which outperforms previous approaches by a sufficient margin while having the ability to generalize across domains.
In the future, we will explore other methods to increase the understanding of deep learning models on group-targeted text; although the categories are well defined, we will examine whether further fine-tuning the categories with more data helps. We are also planning to focus on a generalized language model for code-mixed texts which can handle Hindi code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources). | Unanswerable
fbe5e513745d723aad711ceb91ce0c3c2ceb669e | fbe5e513745d723aad711ceb91ce0c3c2ceb669e_0 | Q: What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?
Text: Introduction
The exponential increase of interactions on social media has generated a huge amount of data on platforms like Facebook and Twitter. These interactions have resulted not only in positive effects but also in negative effects on billions of people, owing to the fact that there are lots of aggressive comments (like hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicide BIBREF1. In this paper we concentrate on problems related to aggressiveness.
The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:
Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, "Well said sonu..you have courage to stand against dadagiri of Muslims".
Covertly Aggressive(CAG) - In this type of aggression the attack is not direct but hidden, subtle and more indirect, while being stated politely most of the time. For example, "Dear India, stop playing with the emotions of your people for votes."
Non-Aggressive(NAG) - Generally this type of text lacks any kind of aggression; it is basically used to state facts, to give wishes on occasions, and to be polite and supportive.
An additional discussion of the aggressiveness task can be found in the Kaggle task, which just divided the task into two classes - i.e., presence or absence of aggression in tweets.
The informal setting/environment of social media often encourages multilingual speakers to switch back and forth between languages when speaking or writing. This results in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systems BIBREF3. This language interchange makes the grammar more complex and thus tough for traditional algorithms to handle. The presence of a high percentage of code-mixed content in social media text has therefore increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.
The massive increase of social media data has rendered manual methods of content moderation difficult and costly. Machine learning and deep learning methods to identify such phenomena have attracted more attention from the research community in recent years BIBREF4.
Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).
A lot of the previous approaches BIBREF5 have used an ensemble model for the task. For example, some of them use an ensemble of statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9, some use an ensemble of statistical and deep learning models BIBREF10, BIBREF11, BIBREF12, and some use an ensemble of deep learning models BIBREF13. There are approaches which proposed a unified architecture based on deep learning BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, while some proposed a unified statistical model BIBREF7. Additionally, some approaches use data augmentation, either through translation or by labeling external data, to make the model generalize across domains BIBREF14, BIBREF10, BIBREF7.
Most of the above-discussed systems show high performance either on (a) the Twitter dataset or (b) the Facebook dataset (given in TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or the level of complexity of the two datasets. So, we concentrated on developing a robust system for English code-mixed texts and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:
Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.
Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term "NLP Features" to represent it in the entire paper.
Dual embedding based on FastText and Glove. This dual embedding helps in high vocabulary coverage and to capture the rare and partially incorrect words in the text (specially by FastText BIBREF20).
Our "Deep-text architecture" uses model averaging strategy with three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that uses multiple models for the same problem and combines their predictions to produce a more reliable and consistent prediction accuracy BIBREF21. This is the simplest form of weighted average ensemble based predictionBIBREF22 where, each ensemble member contribute equally to predictions. Specifically in our case, three different models have been used. The following contains the intuition behind the selection of these three models:
Deep Pyramid CNN BIBREF23 being deeper helps to learn long range associations between temporal regions of text using two-view embeddings.
Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.
Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.
NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.
The intuition behind the NLP features are the following:
Emotion Sensor Dataset We have introduced the use of emotion sensor features as meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from Twitter and blogs.
Controlled Topical Signals from Empath. Empath can analyse text across 200 gold standard topics and emotions. Additionally, it uses neural embeddings to draw connotations among words across more than 1.8 billion words. We have used only selected categories useful for us, like violence, hate, anger, aggression, social media and dispute, from the 200 Empath categories, unlike BIBREF12 which takes 194 categories.
Emoticons frequently used on social media indicate the sense of a sentence BIBREF17, BIBREF19, BIBREF9.
Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15, POS tags provide the degree of target aggressiveness. Like BIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to identify words related to persons, activities, quality, etc., in the text.
Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral), as used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressive from non-aggressive text.
The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (on English code-mixed Facebook data). This means that the performance achieved by our system depends entirely on the training dataset provided by TRAC, which also demonstrates the effectiveness of our approach. Our system outperforms all the previous state-of-the-art approaches used for aggression identification on English code-mixed TRAC data; while being trained only on Facebook comments, the system also outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.
Related work
There are several works on aggression identification submitted at TRAC 2018; among them, some approaches use an ensemble of multiple statistical models BIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models like BIBREF10, BIBREF11, BIBREF12 have used an ensemble of statistical and deep learning models. In these models, the statistical part uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticons, etc. Models like BIBREF13 have used an ensemble of deep learning models based on majority voting.
Some other models like BIBREF28, BIBREF12, BIBREF9 have used different models for Facebook and Twitter, while approaches like BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architectures based on deep learning. Systems like BIBREF14, BIBREF10, BIBREF7 have used data augmentation, either through translation or by labelling external data, to make the model generalize across domains, while BIBREF7 has proposed a unified statistical model.
Among these approaches, BIBREF6 extracted features from TF-IDF of character n-grams, while BIBREF28 uses an LSTM with pre-trained embeddings from FastText. BIBREF15 have used a BiLSTM based model and an SVM metaclassifier model for the Facebook and Twitter test sets, respectively, while BIBREF13 tried ensembling of CNN, LSTM, and BiLSTM.
Some approaches like BIBREF12 have used emotion frequency as one of the features, while some others use sentiment emotion as a feature BIBREF11. Also, BIBREF17, BIBREF19 have converted emoticons to their description. BIBREF9 have used TF-IDF of emoticons per class as one of the features. Compared to all these approaches, we have concentrated on capturing multiple linguistic/pattern-based relations, key-terms and key-patterns (with their association in text) through a combination of deep learning architectures with model averaging. We have also used NLP features, obtained from psycho-linguistic and basic linguistic features, as additional features with our deep learning architecture.
Methodology
In this section, we describe our system architecture for the aggressiveness classifier. In Section SECREF23 we describe the data preprocessing applied to the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we describe the architecture of the different deep learning models, i.e., Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM, respectively. Finally, in Section SECREF49, we describe the model averaging based classification model which combines the prediction probabilities from the three deep learning architectures discussed above (see Figure FIGREF22 for a block diagram of the system architecture).
Methodology ::: Data Preprocessing
We consider the text to be well formatted before applying it to the embedding layer. First, we detect non-English text (of which there is little) and translate it to English using Google Translate. Still, some code-mixed words like "mc", "bc", other English abbreviations and spelling errors like "nd" in place of "and" or "u" in place of "you" cause a deep learning model to confuse sentences of the same meaning. We follow the preprocessing strategy of BIBREF17 to normalize the abbreviations and to remove spelling errors, URLs and punctuation marks, converting emojis to their description.
https://spacy.io/usage/linguistic-features#pos-tagging
Methodology ::: NLP Features
We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature, which uses a statistical model to classify words into 7 different classes based on sentences obtained from Twitter and blogs containing a total of 1,185,540 words. The second one is a collection of selected topical signals from text, collected using Empath (see Table 1).
Our features differ from previous approaches BIBREF8, BIBREF12: BIBREF12 have used emotion features in the form of frequencies, while BIBREF8 have used an emotion feature vector obtained from LIWC 2007 BIBREF30. Unlike BIBREF12, we have used only 6 topical signals from Empath BIBREF29. We have borrowed the idea of using other features like punctuation features and parts-of-speech tags from BIBREF12. Table 1 lists and describes the features, the tools used to obtain them and the number of features resulting from each type.
Methodology ::: Deep Pyramid CNN(DPCNN)
Since it has been proved that CNNs are great feature extractors for text classificationBIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23 while deeper networks(whether RNNs or CNN's) has been proven for learning long-range association like deeper character level CNN'sBIBREF36, BIBREF37, and complex combination of RNN and CNNBIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN)BIBREF23 has 15 layers of word-level CNN's and contains similar pre-activation as proposed in improved ResnetBIBREF43. DPCNN outperforms the 32-layer character CNNBIBREF37 and Hierarchical attention networksBIBREF42 it has added advantage that due to its pyramid structure it does not require dimension matching in shortcut connections defined as z + h(z) as inBIBREF43 where h(z) represents the skipped layers essentially contains two convolutional layers with pre-activation. It uses enhanced region embedding which consumes pre-trained embeddings (in our case it is FastText+Glove based dual embedding).
Enhanced Region Embedding. The current DPCNNBIBREF23, uses two view type enhanced region embedding. For the text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, it trains a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture has been shown in Figure FIGREF29.
Let each word input $x_j \in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$ and the sentence $s_i$ contains sequence of $n$ words $\lbrace w_{1},w_{2},w_{3},......,w_{n}\rbrace $ as shown in Figure FIGREF29. In comparision to conventional convolution layer, DPCNN proposes to use pre-activation, thus essentially the convolutional layer of DPCNN is $\textbf {W}\sigma (\textbf {x})+\textbf {b}$, where $\textbf {W}$ and $\textbf {b}$(unique to each layer) are the weights matrix and bias respectively, we use $\sigma $ as PReLUBIBREF44. During implementation we use kernel size of 3(represented by $\textbf {x}$ to denote the small overlapping regions of text.), The number of filters(number of feature maps denoted by the number of rows of $\textbf {W}$) is 128 as depicted in Figure FIGREF29. With the number of filters same in each convolution layer and max-pooling with stride 2 makes the computation time halved, and doubles the net coverage of convolution kernel. Thus the deeper layers cause to learn long-range associations between regions of text. Let's say $h_{dpcnn} \in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \in R^{24}$ be the NLP features computed from the text. Lets $z_1 \in R^{p_1 + 24}$ be another hidden state obtained as
where $\oplus $ denotes concatenation. The obtained vector $z_1$ is then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively.
Methodology ::: Disconnected RNN(DRNN)
Given a sequence $s_i = [x_{1}, x_{2}, x_{3},....x_{n}]$, where $x_{j} \in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of the input text, the sequence is applied to a variant of RNN called Long Short-Term Memory (LSTM) BIBREF45, as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps updating the memory cell with the current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows:
where $\hat{c}_t$ is the current cell state obtained from current input $x_t$ and previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activation corresponding to input gate, forget gate and output gate respectively, $\sigma $ denotes the logistic sigmoid function and $\odot $ denotes the element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors given as
Specifically, we have used a Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions (forward & backward). The forward LSTM takes the natural order of words from $x_{1}$ to $x_{n}$ to obtain $\overrightarrow{h_t}$, while the backward LSTM takes $x_{n}$ to $x_{1}$ to obtain $\overleftarrow{h_t}$. Then $h_t$ is calculated as
where $\oplus $ is the concatenation and $L$ is the size for one-directional LSTM. Therefore we denote the hidden state in equation DISPLAY_FORM37 with BiLSTM as
To avoid handling long sequences and to capture local information for each word, we define a window size $k$ for each word such that the BiLSTM only sees the previous $k-1$ words together with the current word, where $k$ is a hyperparameter BIBREF24. We use padding <PAD> to make the slices of fixed size k (as shown in Figure FIGREF33). It provides each hidden state $h_t$ with a sequence of $k$ previous words. Since a phrase of $k$ words can lie anywhere in the text, this helps to model a position-invariant phrase representation, which allows the model to identify key phrases important for a particular category. In this case, the equation of $h_t$ is given as
The output hidden vectors, $H = [h_1, h_2, h_3, ...... h_n] \in R^{n \times 2L}$ are converted to fixed-length vector $h_{drnn} \in R^{2L}$ with max pooling over time:
Let $f_{nlp} \in R^{24}$ be the NLP features computed from the text, and let $z_2 \in R^{2L + 24}$ be another hidden state obtained as
where $\oplus $ denotes concatenation. The obtained vector $z_2$ is then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias.
Methodology ::: Pooled BiLSTM
The architecture has been shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, x_{3}, ..... x_{j}]$, where $x_j \in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after BiLSTM is given as
To avoid the loss of information caused by modelling the entire sequence, we have concatenated the max-pooled ($c_{max}$) and mean-pooled ($c_{mean}$) representations of the hidden states calculated over all time steps BIBREF25. We have also concatenated the NLP features $f_{nlp} \in R^{24}$; the final feature vector $z_{3}$ is given as
where $\oplus $ denotes concatenation. The final feature vector $z_3$ is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probabilities; specifically, for class label $k$ it is given as:
where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively.
Methodology ::: Classification Model
According to deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging might be a reasonable ensemble for similar base learners of comparable performance. Now, similar to the information discussed in BIBREF21, we can compute the model averaging (unweighted) by combining the softmax probabilities of three different classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43, DISPLAY_FORM48. The averaged class probabilities are computed as:
where K is the number of classes, and $\hat{y_i}$ is the predicted label for sentence $s_i$.
Experiment and Evaluation ::: Dataset Description
We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.
TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels, (a) Non-Aggressive(NAG), (b) Overtly-Aggressive (OAG) and (c) Covertly-Aggressive(CAG). The distribution of training, validation and test sets are described in Table TABREF56.
Kaggle Dataset: This dataset contains 20001 tweets which are manually labeled. The labels are divided into two categories (indicating the presence or absence of aggression in tweets): AGG (Aggressive) or NAG (Non-Aggressive). We have used the same test split available in the baseline code. The distribution of the training and test sets is given in Table TABREF56.
Experiment and Evaluation ::: Experimental Setup
We have used Glove EmbeddingsBIBREF49 concatenated with FastText EmbeddingsBIBREF20 in all the three classification models presented in this paper. Specifically, we used Glove pre-trained vectors obtained from Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries where each word is represented using 100-dimensional vector. In the case of FastText the word is represented using 300-dimensional vector. Also, we have applied spatial dropoutBIBREF50 of 0.3 at embedding layer for DPCNN(in section SECREF30) and Pooled BiLSTM(in section SECREF45). For DPCNN model(in SECREF30) we have learnt 128-dimensional vector representation for unsupervised embeddings implicitly for task specific representation as in BIBREF23. Additionally, for DPCNN all the convolutional layers used 128 filters, kernel size of 3 and max-pooling stride 2. Additionally, in the case of DPCNN we have used kernel and bias regularizer of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) proposed in BIBREF44 while the activation at each of the convolutional kernel is linear. For, DRNN(in section SECREF34) we have used the window size of 8 and rest of the parameters related to LSTM units are same as given inBIBREF24. For, Pooled BiLSTM(in section SECREF45) we have used LSTM hidden units size as 256. The maximum sequence length is 200 in all three models. In each of the classification model the classification layer contains the fully connected layer with softmax activation with output size of 3 equal to number of classes in case of TRAC 2018 dataset and its 2 in case of Kaggle dataset. Training has been done using ADAM optimizerBIBREF51 for DPCNN and RMSPROPBIBREF52 for DRNN and Pooled Bi-LSTM models. All the models are trained end-to-end using softmax cross entropy lossBIBREF53 for TRAC 2018 dataset and binary cross entropy lossBIBREF53 for Kaggle dataset.
To train our model for TRAC 2018 dataset, we merged the training and validation dataset and then used 10% split from shuffled dataset to save the best model, for all classifiers. We have used only 20 NLP features (except TF-IDF Emoticon feature and Punctuation feature as given in Table TABREF25) for Kaggle dataset (as these are not present in the Kaggle dataset).
Experiment and Evaluation ::: Evaluation Strategy
To compare our experimental results we have used top-5 systems from the published results of TRAC-2018BIBREF5. To compare our results on Kaggle dataset, we have used the last & the best published result on Kaggle website as a baseline. We have conducted the separate experiments, to properly investigate the performance of (a) each of the classifiers (used in our model averaging based system), (b) impact of the NLP features on each of these classifiers and finally, (c) the performance of our proposed system. In Table TABREF57, TABREF57 and TABREF57, models, named as DPCNN(ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM(ref SECREF45) are corresponding models without NLP features. Similarly, DPCNN+NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are corresponding models with NLP features. The Model Averaging (A+B+C) is the ensemble of three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method, which represents the model averaging of three models with NLP features.
Experiment and Evaluation ::: Results and Discussion
In this paper, we have evaluated our model using weighted macro-averaged F-score. The measure is defined as in (See BIBREF5, BIBREF2). It weights the F-score computed per class based on the class composition in the test set and then takes the average of these per-class F-score gives the final F-score. Table TABREF57, TABREF57 and TABREF57. presents the comparative experimental results for the proposed method in this paper with respect to the state-of-the-art. The top 5 modelsBIBREF5 given in Table TABREF57 and TABREF57. are the best performing models for Facebook and Twitter test dataset respectively on TRAC 2018. We have followed all the experimental guidelines as discussed in TRAC contest guideline paperBIBREF2, BIBREF5. From the results given in Table TABREF57, TABREF57 and TABREF57 it is clear that our proposed model shows the best performance among all of the approaches. These results also state that all the deep learning architectures with NLP features, perform better than individual corresponding deep learning architectures. This means NLP features, adds some value to the architectures, even if it is not very high.
Conclusion and Future Work
In this paper, we have briefly described the approach we have taken to solve the problem of aggression identification on online social media texts, which is very challenging since the dataset is noisy and code-mixed. We presented an ensemble of deep learning models which outperforms previous approaches by a sufficient margin while having the ability to generalize across domains.
In the future, we will explore other methods to increase the understanding of deep learning models on group-targeted text; although the categories are well defined, we will examine whether further fine-tuning the categories with more data helps. We are also planning to focus on a generalized language model for code-mixed texts which can handle Hindi code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources). | None
1571e16063b53409f2d1bd6ec143fccc5b29ebb9 | 1571e16063b53409f2d1bd6ec143fccc5b29ebb9_0 | Q: What is the baseline?
Text: Introduction
With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinion regarding specific issues BIBREF0. This phenomenon has been spreading recently with the heavy usage of social media and online news sources. Many anonymous accounts have started to appear on social media platforms, as well as new online news agencies that do not present a clear identity of their owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the 2016 U.S. presidential elections. The initial disclosures by Twitter included 3,841 accounts. A similar attempt was made by Facebook, which detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.
False information is categorized into 8 types according to BIBREF1. Some of these types are intended to deceive, whereas others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation refers to false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas, but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or story snippets to redirect attention (for traffic attention). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic).
The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The owner of Wikipedia encyclopedia created the news site WikiTribune to encourage the evidence-based journalism.
Another way of addressing this issue is fact-checking websites. Websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assessing the credibility of claims that have been circulated massively on online platforms. These campaigns are not limited to the English language; other languages such as Arabic have been targeted by sites like fatabyyano.net.
Introduction ::: Hypothesis
Trusted news recounts its content in a naturalistic way without attempting to affect the opinion of the reader. On the other hand, false news takes advantage of the sensitivity of the presented issue to affect the readers' emotions, which in turn may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweet rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies, while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition shows similarity with irony language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer:
RQ1 Can emotional features help detecting false information?
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader.
Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection.
The key contributions of this article are:
Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising.
Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences from an affective perspective in both sources, and obtain valuable insights on how emotions can contribute to detect false news.
The rest of the paper is structured as follows; After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis on the false information types from emotional perspective in Section SECREF6. Finally, the conclusions of this work are summarized in Section SECREF7.
Related Work
Previous work on the analysis of false information is rather limited in terms of the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts have tried to analyze the language of false news to give a better understanding. A work done in BIBREF6 has studied false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cue words, and syntax) and achieved a good performance compared to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. They also employed the LIWC dictionary to exploit the existence of personal pronouns, swear and sexual words, etc. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from a factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood spread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies, while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like the length of hoax articles as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed for fake news detection. In general, they are divided into social media based and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based and network-based features, etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. They used message-based, user-based, and propagation-based features, and they found that some features related to the user information, like the user's age, number of followers, status counts, etc., have helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have mainly focused on inferring the credibility of the claims by retrieving evidence from the Google or Bing search engines. These approaches have employed a different set of features, starting from manual features (e.g. cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to fully automatic approaches using deep learning networks. A recent trend has started to appear that tries to approach the detection of fake news from a stance perspective. The aim is to predict how other articles orient to a specific fact BIBREF19, BIBREF20, BIBREF21.
Emotionally-infused Model
In this section we describe the Emotionally-Infused Network we propose (EIN).
Emotionally-infused Model ::: Emotional Lexicons
Several emotional models well-grounded in psychology science have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrot BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of the emotional words in texts as well to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath:
EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated using the six Ekman's basic emotions.
EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using the eight Plutchik's emotions. This lexicon contains 14,181 words.
SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models.
LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion".
Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in Parrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear").
In our study we consider the 17 emotions shown in Figure FIGREF14.
Emotionally-infused Model ::: Model
We choose a Long Short-Term Memory (LSTM) network BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embeddings (content-based) and emotional features (see Figure FIGREF24).
Emotionally-infused Model ::: Input Representation
Our network consists of two branches. In the content-based branch, we use an embedding layer followed by an LSTM layer. Then, we add an attention layer BIBREF32 that makes this branch focus on (highlight) particular words over others. The attention mechanism assigns a weight to each word vector produced by the LSTM layer, according to its relevance for the classification. The input representation for this branch is defined as follows: the input sentence $S$ of length $n$ is represented as $[S_1, S_2, \dots , S_n]$, where $S_i \in {\rm I\!R}^d$ is the $d$-dimensional word embedding vector of the $i$-th word of the input sentence. The word vectors are passed to the LSTM layer, which learns a hidden state $h_t$ by capturing the previous timesteps (past features). The hidden state $h_t$ produced at each time step is passed to the attention layer, which computes a "context" vector $c_t$ as the weighted mean of the state sequence $h$ by:
$c_t = \sum _{j=1}^{T} \alpha _{tj} h_j$
where $T$ is the total number of timesteps in the input sequence and $\alpha _{tj}$ is the weight computed at time step $j$ for state $h_j$. This context vector is then concatenated with the output of the $\mathrm{Dense}_a$ layer (see Figure FIGREF24) and passed to the $\mathrm{Dense}_b$ layer, which precedes a final Softmax function that predicts the output classes. In this way, the content-based branch is combined with the emotion-based branch.
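For illustration, a minimal Keras sketch of such an attention (pooling) layer is given below. It implements the general mechanism described above (one learned score per timestep, softmax-normalized and used to average the LSTM states); it is a sketch under those assumptions, not necessarily the exact implementation of BIBREF32.

import tensorflow as tf
from tensorflow.keras import layers

class AttentionPooling(layers.Layer):
    """Collapses the LSTM state sequence h_1..h_T into a single context vector
    c = sum_j alpha_j * h_j, where alpha is a softmax over one learned score per state."""
    def build(self, input_shape):
        self.w = self.add_weight(name="att_w",
                                 shape=(int(input_shape[-1]), 1),
                                 initializer="glorot_uniform",
                                 trainable=True)
        super().build(input_shape)

    def call(self, h):                                        # h: (batch, T, units)
        scores = tf.squeeze(tf.matmul(h, self.w), axis=-1)    # (batch, T)
        alpha = tf.nn.softmax(scores, axis=-1)                # attention weights
        return tf.reduce_sum(h * tf.expand_dims(alpha, -1), axis=1)   # (batch, units)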
On the other hand, the input representation for the emotion-based branch is defined as follows: we have $N$ emotional lexicons $L_n$ with $n\in [1, 5]$, and each lexicon covers $M$ emotions, depending on the emotion model it uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E_m$ of an input document obtained with the $n$-th emotional lexicon is denoted $L_nE_m$. In our implementation, the emotion vector $E_m$ of a lexicon $L_n$ is built using word frequencies normalized by the input sentence's length. Each input sentence is then represented by the concatenation of these vectors, $v = [L_1E_m, L_2E_m, \dots , L_NE_m]$, where $v \in {\rm I\!R}^q$ and $q = \sum _{n=1}^{N} M_n$ is the total number of emotion categories over all the lexicons.
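A minimal sketch of how such a normalized emotion vector can be computed is shown below; the toy lexicons, emotion names, and whitespace tokenization are illustrative assumptions, and the real resources listed above are much larger.

from collections import Counter

# Toy lexicons: each maps an emotion name to a set of trigger words. The real
# resources (EmoSenticNet, EmoLex, SentiSense, LIWC, Empath) are much larger.
LEXICONS = [
    {"joy": {"happy", "delighted"}, "fear": {"panic", "terrified"}},
    {"surprise": {"unexpected", "shocking"}, "disgust": {"gross", "vile"}},
]

def emotion_vector(text, lexicons=LEXICONS):
    """Frequency of each emotion's trigger words, normalized by the document length,
    concatenated over all lexicons (the vector v of dimension q)."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    length = max(len(tokens), 1)              # avoid division by zero
    vector = []
    for lexicon in lexicons:
        for emotion, words in sorted(lexicon.items()):
            hits = sum(counts[w] for w in words)
            vector.append(hits / length)      # normalized word frequency
    return vector

print(emotion_vector("the shocking video left the terrified crowd anything but happy"))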
Evaluation Framework ::: Datasets
Annotated data is a crucial source of information to analyze false information. Few datasets of false information are currently available, since the majority of previous works focus on annotating datasets from a factuality perspective. To analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list of suspicious Twitter accounts.
Evaluation Framework ::: Datasets ::: News Articles
Our source of news articles is the dataset described in BIBREF2. It was built from two different sources: for the trusted news (real news), the authors sampled news articles from the English Gigaword corpus; for the false news, they collected articles from seven different unreliable news sites. These articles include satires, hoaxes, and propagandas, but not clickbaits. Since we are also interested in analyzing clickbaits, we take a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach 5,000 words), and this length could affect the quality of the analysis, as mentioned before. We therefore focus on analyzing the initial part of each article; our intuition is that this is where emotion-bearing words are more frequent. Accordingly, we truncate long news articles to a maximum length of N words (N=300), choosing the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles, and articles without textual content.
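A sketch of this preprocessing step (truncation to the first N=300 words and removal of very short, duplicated, or empty articles) could look as follows; the min_words threshold is an assumption, since only the truncation length N is stated above.

def preprocess_articles(articles, max_words=300, min_words=20):
    """Keeps only the initial part of each article and drops very short, duplicated,
    or empty ones. min_words is an assumed threshold; the text only states that
    very short and redundant articles were removed."""
    seen, cleaned = set(), []
    for text in articles:
        words = text.split()
        if len(words) < min_words:            # very short or empty article
            continue
        truncated = " ".join(words[:max_words])
        if truncated in seen:                 # redundant article
            continue
        seen.add(truncated)
        cleaned.append(truncated)
    return cleaned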
Evaluation Framework ::: Datasets ::: Twitter
For this dataset, we rely on a list of Twitter accounts for each type of false information from BIBREF6. This list was created from public resources that annotate suspicious Twitter accounts. The authors in BIBREF6 built a dataset by collecting tweets from these accounts and made it available; for the real news, we merge their list with another 32 Twitter accounts from BIBREF34. In this work, we could not use the previous dataset, so we decided to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating the accounts manually, we found that many tweets just contain links without textual news; therefore, to ensure the quality of the crawled data (and to have enough data), we chose a high value for M. After the collection process, we cleaned these tweets by removing duplicated tweets, very short tweets, and tweets without textual content. Table TABREF35 shows a summary of both datasets.
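The cleaning step for the crawled tweets can be sketched as below; the min_tokens threshold and the URL pattern are illustrative assumptions, since the text only states that duplicated, very short, and non-textual tweets were removed.

import re

URL_RE = re.compile(r"https?://\S+")

def clean_tweets(tweets, min_tokens=4):
    """Drops duplicated tweets, tweets that only contain links, and very short tweets.
    min_tokens is an assumed threshold."""
    seen, kept = set(), []
    for tweet in tweets:
        text = URL_RE.sub("", tweet).strip()   # strip links, keep the textual part
        if len(text.split()) < min_tokens:     # link-only or very short tweet
            continue
        if text in seen:                       # duplicated tweet
            continue
        seen.add(text)
        kept.append(text)
    return kept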
Evaluation Framework ::: Baselines
Emotions have been used in many natural language processing tasks and have shown their effectiveness BIBREF35. We aim at investigating their effectiveness in detecting false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only, and we compare it to two baselines; our aim is to investigate whether the emotional features alone can detect false news. The two baselines of this model are the Majority Class baseline (MC) and the Random selection baseline (RAN).
We compare the EIN model to different baselines: a) The first one is bag-of-words with a Support Vector Machine classifier (BOW-SVM); we tested different classifiers and chose SVM since it gave the highest result in 10-fold Cross-Validation (CV). b) The second baseline is based on word embeddings: for each input document we extract an average word embedding vector by taking the mean of the embeddings of the document's words; similarly, we tested different classifiers and the Logistic Regression classifier showed the best performance (WE-LR). c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers.
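For illustration, the two non-neural baselines could be sketched as follows; the specific SVM variant, vectorizer settings, and hyper-parameters are assumptions, since only the classifier families are stated above, and the embeddings object is assumed to behave like a gensim KeyedVectors lookup.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# a) BOW-SVM: bag-of-words features with a linear SVM.
bow_svm = make_pipeline(CountVectorizer(), LinearSVC())

# b) WE-LR: average the word embeddings of a document, then Logistic Regression.
def average_embedding(doc, embeddings, dim=300):
    vectors = [embeddings[w] for w in doc.split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def train_we_lr(train_docs, train_labels, embeddings):
    X = np.vstack([average_embedding(d, embeddings) for d in train_docs])
    return LogisticRegression(max_iter=1000).fit(X, train_labels)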
Experiments and Results ::: Emotion-based Model
In our experiments, we use $20\%$ of each dataset for testing and apply 10-fold cross-validation on the remaining part to select the best classifier as well as to tune it. We tested many classifiers and finally chose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets.
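The classifier selection step can be sketched as below with scikit-learn; the candidate set and hyper-parameters are illustrative assumptions, since only the winning classifier (Random Forest) is reported above.

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

CANDIDATES = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

def select_classifier(X_emotions, y):
    """10-fold CV over candidate classifiers trained on the emotion features only."""
    scores = {name: cross_val_score(clf, X_emotions, y, cv=10, scoring="f1_macro").mean()
              for name, clf in CANDIDATES.items()}
    best = max(scores, key=scores.get)
    return best, scores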
The results on both datasets show that emotional features clearly help to detect false news compared to the baselines (RQ1). The emotional features perform better on the news articles dataset than on the tweets. We are also interested in investigating how well the emotional features detect each class compared to the RAN baseline; we choose the RAN baseline since it shows better results in terms of macro-F1 score. To do so, we investigated the True Positive (TP) classification ratio for each class in each dataset.
The clickbait class shows the highest TP ratio compared to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth mentioning that for the hoax class the proposed approach is better than the random baseline only by a small margin ($4\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story; hence, the writer tries to deliver the story in a normal way without raising the reader's suspicion. The number of instances in the false information classes of the news articles dataset is the same, so there is no majority class that the classifier can be biased towards. This is not the case for the Twitter dataset, which is not balanced; there, the results are biased towards the majority class (propaganda). In general, however, all the classes' TP ratios are larger than the corresponding ones obtained with the RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim to mislead the reader. In the following, we present the results obtained by the proposed emotionally-infused model.
Experiments and Results ::: Emotionally-Infused Model
In the neural model, to reduce the computational cost, instead of the cross-validation process we take another $20\%$ from the training part as a validation set (in addition to the $20\%$ reserved for testing). For the pretrained word embeddings, we use the 300-dimensional Google News Word2Vec embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers of the baselines, we use the Scikit-Learn python library, and for the deep learning network we use the Keras library with TensorFlow as backend. To tune the hyper-parameters of our deep learning network, we use the Hyperopt library, and to reduce the effect of overfitting, we use the early stopping technique.
Table TABREF44 summarizes the models' parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch ($\mathrm{Drop}_c$) as well as after the attention layer in the other branch ($\mathrm{Drop}_d$), before the concatenation step. Since it is a multiclass classification task, we use the categorical cross-entropy loss function.
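Putting the two branches together, a Keras sketch of the EIN architecture could look as follows; the layer sizes, dropout rates, and layer names (dense_a, dense_b) are placeholders standing in for the Hyperopt-tuned values of Table TABREF44, and the AttentionPooling layer is the one sketched in the model description.

from tensorflow.keras import Model, initializers, layers

def build_ein(emb_matrix, max_len, emo_dim, n_classes,
              lstm_units=128, dense_units=64, dropout=0.3):
    # Content-based branch: pre-trained embeddings -> LSTM -> attention -> Drop_d.
    words = layers.Input(shape=(max_len,), name="words")
    x = layers.Embedding(emb_matrix.shape[0], emb_matrix.shape[1],
                         embeddings_initializer=initializers.Constant(emb_matrix),
                         trainable=False)(words)
    x = layers.LSTM(lstm_units, return_sequences=True)(x)
    x = AttentionPooling()(x)        # attention layer sketched earlier
    x = layers.Dropout(dropout)(x)

    # Emotion-based branch: emotion vector v -> Dense_a -> Drop_c.
    emotions = layers.Input(shape=(emo_dim,), name="emotions")
    e = layers.Dense(dense_units, activation="relu", name="dense_a")(emotions)
    e = layers.Dropout(dropout)(e)

    # Concatenation of both branches, Dense_b, and the final Softmax.
    merged = layers.Concatenate()([x, e])
    merged = layers.Dense(dense_units, activation="relu", name="dense_b")(merged)
    outputs = layers.Dense(n_classes, activation="softmax")(merged)

    model = Model(inputs=[words, emotions], outputs=outputs)
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model

Such a model could then be trained on the $[words, emotions]$ inputs with, for instance, tf.keras.callbacks.EarlyStopping on the validation split described above.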
Table TABREF47 summarizes the performance of the proposed model in comparison to the baselines. We report macro-averaged precision, recall, and F1, together with accuracy; to compare the models we consider the macro-averaged metrics since they average the results over all classes. The proposed baselines clearly show high results: the LSTM baseline has the best performance on the news articles dataset, whereas on Twitter the BOW-SVM baseline performs better than the LSTM. We were interested in investigating the reason behind this, so we checked the coverage ratio of the used embeddings on the Twitter dataset. We have to mention that we excluded stop words when representing the input documents with the pre-trained Google News word embeddings. In the news articles dataset, the coverage ratio of the embeddings is around $94\%$, while in Twitter it is around $70\%$. Therefore, we tuned the word embeddings during the training process to improve the documents' representation, since we have a larger dataset from Twitter. This process contributed $1.9\%$ to the final macro-F1 result in Twitter (the result without tuning is $53.51\%$). Even so, the results obtained with the LSTM baseline are still lower than those obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings; we therefore tried different embeddings, but none of them improved the result. The second baseline (W2V-LR) confirmed the same issue regarding the embeddings: the W2V-LR macro-F1 result on the news articles dataset is competitive, whereas it is much lower on Twitter. The LSTM baseline serves two purposes: in addition to being a strong baseline, it also shows how much the emotional features contribute to the emotionally-infused network.
The EIN results outperform the baselines by a large margin (around 2% in Twitter and 7% in news articles), especially on the news articles dataset; the margin between EIN and the best baseline is lower on the Twitter dataset. The results also show that combining emotional features clearly boosts the performance, as can be seen by comparing the results of EIN to the LSTM. EIN shows superior results on the news articles dataset with respect to the LSTM (79.43%); a similar situation appears on the Twitter dataset, but with a lower margin (59.70%). The results of EIN on the Twitter dataset show that the emotional features compensate for the weak coverage of the word embeddings, improving the performance and overcoming the BOW-SVM baseline.
We observed before that the clickbait TP ratio in the news articles dataset is the highest one, which points out that the clickbait class is less difficult to detect, specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using the t-distributed Stochastic Neighbor Embedding (t-SNE) technique BIBREF36 to project the documents' representations from a high-dimensional space to a 2D plane. Concretely, we project the embeddings learned by EIN by extracting them from the outputs of the $\mathrm{Dense}_b$ layer (see Figure FIGREF48). We extract the embeddings twice: once at an early epoch (epoch 10) of the training phase and once at the last epoch.
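A sketch of this projection step is given below; it assumes the $\mathrm{Dense}_b$ layer is named "dense_b" as in the earlier architecture sketch, and the exact t-SNE settings are illustrative.

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from tensorflow.keras import Model

def project_dense_b(ein_model, inputs, labels):
    """Extracts the Dense_b activations of a trained EIN and projects them to 2D with t-SNE."""
    extractor = Model(inputs=ein_model.inputs,
                      outputs=ein_model.get_layer("dense_b").output)
    embeddings = extractor.predict(inputs)
    points = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
    plt.scatter(points[:, 0], points[:, 1], c=labels, s=5, cmap="tab10")
    plt.show()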
Our aim with the early-epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect than the other classes. As we can see in the 10-epoch plot, the clickbait class needs only a few epochs to be separated from the other types, which supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still some overlap with real-news records. This result points out that emotions in clickbaits play a key role in deceiving the reader. The figure also shows that the disinformation classes still need more training epochs for a better separation: real-news records completely overlap with the false information classes, and the false information classes overlap with each other. At the last epoch, on the other hand, the classes are clearly separated from each other and, more importantly, from the real news; there is still a small overlap between satires and hoaxes, as well as with a few records from the propaganda class.
Experiments and Results ::: EIN as Clickbaits Detector
From the previous results in Section SECREF37, as well as from what we observe in Figure FIGREF48, EIN obtains a clear separation of the clickbait class. These observations motivate us to investigate EIN as a clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As mentioned previously, this dataset was originally built from two different text sources: for clickbaits, the authors manually identified a set of online sites that publish many clickbait articles, whereas for the negative class they collected headlines from a corpus of Wikinews articles gathered in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbait detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average word length, the ratio of the number of stop words to the number of thematic words, and the longest separation between syntactically dependent words), word patterns (presence of a cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slang, and determiners), and N-gram features (word, Part-Of-Speech, and syntactic n-grams). Using this set of feature groups, the authors tested different classifiers, with SVM showing state-of-the-art results. They considered Accuracy, Precision, Recall, and F1 to compare their approach to a baseline (an online web browser extension for clickbait detection called Downworthy).
In this experiment, we consider the third baseline (LSTM) to observe the improvement brought by the emotional features in the EIN model. Differently from the previous experiments, this is a binary classification task; therefore, we use binary cross-entropy as the loss function and replace the Softmax layer with a Sigmoid function. The new parameters for both the LSTM and EIN models are given in Table TABREF44.
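The change of the output head can be sketched as below; it assumes the functional EIN model and the "dense_b" layer name from the earlier sketches, and is only one possible way to rebuild the head.

from tensorflow.keras import Model, layers

def to_binary_head(ein_model):
    """Replaces the multi-class Softmax head with a single Sigmoid unit and switches
    the loss to binary cross-entropy for the clickbait vs. non-clickbait task."""
    penultimate = ein_model.get_layer("dense_b").output
    output = layers.Dense(1, activation="sigmoid")(penultimate)
    binary_model = Model(inputs=ein_model.inputs, outputs=output)
    binary_model.compile(loss="binary_crossentropy", optimizer="adam",
                         metrics=["accuracy"])
    return binary_model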
In Table TABREF51 we present the results of the Stop_Clickbait approach, the LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector by a good margin. Furthermore, the results of EIN are superior to both the LSTM and the Stop_Clickbait detector. Considering emotions in the EIN deep learning approach improved the detection of false information, which is due to the fact that in clickbaits emotions are employed to deceive the reader.
Discussion
The results show that detecting suspicious news in Twitter is harder than detecting it in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially in the case of the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets differs from that of the news articles: news in Twitter contains many abbreviations (amp, wrt, JFK, etc.), abbreviations of bad words (WTF, LMFO, etc.), informal language, and typos. This reduces the coverage ratio of the word embeddings. We also noticed that suspicious news in Twitter is more related to sexual issues. To validate our observations, we computed the mean value of sexual words using a list of sexual terms BIBREF37, where the mean value is the average number of times a sexual/bad word appears in a tweet, normalized by the length of the tweet. The mean value in Twitter is 0.003, while in news articles it is 0.0024. Similarly, suspicious news in Twitter contains more insulting words than in news articles, with a mean value of 0.0027 in Twitter and 0.0017 in news articles.
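This normalized mean can be computed with a short helper such as the one below; the whitespace tokenization is an assumption, and the term lists themselves come from BIBREF37.

def lexicon_mean(documents, term_list):
    """Average over documents of the count of listed terms, normalized by document length."""
    ratios = []
    for doc in documents:
        tokens = doc.lower().split()
        if not tokens:
            continue
        hits = sum(1 for token in tokens if token in term_list)
        ratios.append(hits / len(tokens))
    return sum(ratios) / len(ratios) if ratios else 0.0

# e.g. lexicon_mean(tweets, sexual_terms) vs. lexicon_mean(articles, sexual_terms)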
In the following, we focus on analyzing false information from an emotional perspective, aiming to answer the remaining questions: RQ2, RQ3, and RQ4.
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
Intuitively, the emotions do not contribute equally to the classification process, since some words may signal specific kinds of emotions rather than others. To investigate this point, we use Information Gain (IG) to identify the importance of the emotions in discriminating between real news and all the other types of false news (multiclass task) in both the Twitter and the news articles datasets (see Figure FIGREF54). Before going through the feature importance ranking, we notice that the shapes of the emotion rankings are very similar in both Twitter and news articles. This indicates that, although the language is different, both sources have a similar overall emotion distribution; in other words, false news employs a similar emotional pattern in both text sources. Since the news language in Twitter is not presented as clearly as in news articles, this observation can help to build a cross-source system that is trained on suspicious news from news articles to detect the corresponding ones in Twitter. Figure FIGREF54 also shows that the emotion "joy" is the most important emotion in both datasets, while "despair" and "hate" are almost not used in the classification process. The ranking of the features in the two sources is different: in the news articles dataset the most important emotions are "joy", "anticipation", "fear", and "disgust", respectively, whereas the top ones in Twitter are "joy", "sadness", "fear", and "disgust".
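One way to obtain such a ranking is sketched below, using scikit-learn's mutual information estimator as a stand-in for Information Gain; this is an assumption, since the exact IG implementation is not stated above.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_emotions(X_emotions, y, emotion_names):
    """Ranks the 17 emotion features by estimated information gain w.r.t. the class labels."""
    ig = mutual_info_classif(X_emotions, y, random_state=0)
    order = np.argsort(ig)[::-1]
    return [(emotion_names[i], float(ig[i])) for i in order]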
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
We measure statistically significant differences using the t-test on the emotions across real news and false news (binary task) in both datasets (Figure FIGREF55). These findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust", and "surprise" have statistically significant differences between real and suspicious news in both datasets, while some other emotions, such as "despair" and "anger", show no statistically significant difference in either dataset. It turns out that the results we obtain are generally consistent with the IG results of research question RQ2. We noticed in the IG analysis that some emotions have a higher importance in one of the news sources: "sadness", "anger", and "fear" have a higher importance in Twitter than in news articles, and the opposite holds for "hope". We observe the same findings with the t-test.
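A per-emotion test of this kind can be sketched with SciPy as below; the use of Welch's variant (unequal variances) and the significance threshold are assumptions, since only "the t-test" is stated above.

import numpy as np
from scipy.stats import ttest_ind

def emotion_ttests(X_emotions, y, emotion_names, real_label=0, alpha=0.05):
    """Two-sample t-test per emotion between real news (y == real_label) and false news."""
    X_emotions, y = np.asarray(X_emotions), np.asarray(y)
    results = {}
    for i, name in enumerate(emotion_names):
        real = X_emotions[y == real_label, i]
        false = X_emotions[y != real_label, i]
        stat, p_value = ttest_ind(real, false, equal_var=False)   # Welch's variant
        results[name] = (stat, p_value, p_value < alpha)
    return results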
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
False information types differ in the way they present the news to the reader. This raises a question: what are the top emotions employed in each type of false information? In Table TABREF57, we present the three emotions that contribute the most to the classification of each type, which indicates the emotions that are used the most in each type of false information.
Table TABREF57 shows that clickbaits mostly express "surprise" and "negative emotion". This is in line with the definition of clickbaits as "attention redirection": the reader is exploited by being convinced that something unexpected and emotionally negative is waiting. The presence of "fear" among the top features in Twitter is interesting: a recent study presents the hypothesis that curiosity is the best remedy for fear BIBREF38, based on psychological interpretations. Taking into account the definition of clickbaits as "attention redirection", our results support this hypothesis. Furthermore, despite the language differences between the two datasets, we obtain almost the same results, which reinforces our findings. For hoaxes, it is not simple to interpret a specific emotional pattern in the results. We might justify this by the fact that hoaxes are written to convince the reader of the validity of a story, so the writer tries to present the story in a normal (truthful-looking) way, similar to a real story; therefore, the top emotions are not unique to the hoax type. What we do find is that the top hoax emotions in the two datasets are generally different, except for the emotion "like": despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to smoothly grab the reader's attention. The propaganda type has a clearer emotional interpretation considering its definition. We find that propaganda expresses "joy" and "fear", and at the same time "calmness", in the news articles. "Joy" and "fear" are opposites from an emotional polarity perspective, with "joy" at the extreme of the positive emotions and "fear" at the extreme of the negative ones, and at the same time "calmness" is present. This emotional shifting between the two extremes is a clear attempt at opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but with "hope" instead of "joy". Lastly, satire is defined as a type of parody presented in the typical format of mainstream journalism, similar to the irony and sarcasm phenomena BIBREF39. The analysis shows that "disgust" and "positive emotion" are present in both datasets, but we get "negative emotion" in the news articles and "sadness" in Twitter (both on the negative side of the emotions). We were interested in investigating the cause of the emotion "disgust", which appears in the results of both datasets, and conducted a manual analysis of the satire texts in both datasets to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion "disgust" to give a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset, highlighting the words that triggered the emotion "disgust".
Conclusions and Future Work
In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and in news articles. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information, and we validated the performance of the model by comparing it to an LSTM network and other baselines. The results on the two datasets showed that clickbaits have a simpler manipulation language in which emotions help to detect them, demonstrating that emotions play a key role in deceiving the reader. Based on this result, we investigated our model's performance on a clickbait dataset and compared it to the state of the art; our model showed superior results, with an F1 value close to 96%.
Overall, the results confirmed that emotional features boost the performance of the EIN model, achieving better results on 3 different datasets (RQ1). These results emphasize the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexually oriented and uses many insulting words; our analysis showed that emotions can help to detect false information in Twitter as well. In the analysis section, we answered a set of questions regarding the distribution of emotions in false news. We found that emotions have a similar importance distribution in Twitter and news articles, regardless of the differences in the language used (RQ2). The analysis showed that most of the considered emotions have a statistically significant difference between real and false news (RQ3). Emotions play a different role in each type of false information, in line with its definition (RQ4): clickbaits try to attract the attention of the reader mainly by employing the "surprise" emotion; propagandas manipulate the feelings of the readers by using extreme positive and negative emotions, while triggering a sense of "calmness" to confuse the readers and enforce a feeling of confidence; satire news instead uses the "disgust" emotion to give a sense of humor. To sum up, the initial part of false news contains more emotions than the rest of the document, and our approach exploits this fact for their detection.
To the best of our knowledge, this is the first work that analyzes the impact of emotions on the detection of false information considering both social media and news articles. As future work, the results of our approach as a clickbait detector motivate us to develop a clickbait-detection web browser extension. We will also study how emotions flow within the articles of each kind of false information, which is worth investigating, as the results of this work confirmed. | Majority Class baseline (MC) , Random selection baseline (RAN)
d71937fa5da853f7529f767730547ccfb70e5908 | d71937fa5da853f7529f767730547ccfb70e5908_0 | Q: What datasets did they use?
Text: Introduction
With the complicated political and economic situations in many countries, some parties with specific agendas publish suspicious news to affect public opinion regarding specific issues BIBREF0. This phenomenon has been spreading recently with the wide usage of social media and online news sources. Many anonymous accounts have started to appear on social media platforms, as well as new online news agencies that do not present a clear identity of their owner. Twitter recently detected a campaign organized by agencies from two different countries to affect the results of the 2016 U.S. presidential elections; the initial disclosures by Twitter included 3,841 accounts. A similar action was taken by Facebook, which detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.
False information is categorized into 8 types according to BIBREF1; some of these types are intended to deceive, whereas others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be grouped into two main categories - misinformation and disinformation - where misinformation refers to false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information published with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas, but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or story snippets to redirect attention (to attract traffic). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic).
The topic of fake news is gaining attention due to its risky consequences, and a vast set of campaigns has been organized to tackle it. The founder of the Wikipedia encyclopedia created the news site WikiTribune to encourage evidence-based journalism.
Another way of addressing this issue is through fact-checking websites. Websites like politifact.com, snopes.com, and factchecking.org aim to debunk false news by manually assessing the credibility of claims that have circulated massively on online platforms. These campaigns are not limited to the English language: other languages, such as Arabic, are targeted by sites like fatabyyano.net.
Introduction ::: Hypothesis
Trusted news recounts its content in a naturalistic way, without attempting to affect the opinion of the reader. On the other hand, false news takes advantage of the sensitivity of the presented issue to affect the readers' emotions, which in turn may affect their opinions as well. A set of previous works has investigated the language of false information. The authors in BIBREF3 studied rumours in Twitter, investigating a corpus of true and false tweet rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies, while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena; for example, satire, by its definition, shows similarity with irony. The work in BIBREF4 showed that affective features work well in the detection of irony, and confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer:
RQ1 Can emotional features help detecting false information?
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles, we focus on their beginning part, since these articles are fairly long and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type, since this is where the writer normally tries to trigger some emotions in the reader.
Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection.
The key contributions of this article are:
Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising.
Analysis: We present a comprehensive analysis of two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences between both sources from an affective perspective, and obtain valuable insights on how emotions can contribute to detecting false news.
The rest of the paper is structured as follows: after a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis of the false information types from an emotional perspective in Section SECREF6. Finally, the conclusions of this work are summarized in Section SECREF7.
Related Work
The work that has been done previously on the analysis of false information is rather small regarding the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to give a better understanding. A work done in BIBREF6 has studied the false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cues words, and syntax) and achieved a good performance comparing to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. As well, they employed in their study the LIWC dictionary to exploit the existence of personal pronouns, swear, sexual, etc. words. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these previous false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood widespread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like hoaxes articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed on fake news detection. In general, they are divided into social media and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based, network-based etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. They used message-based, user-based, and propagation-based features, and they found that some features related to the user information like user's age, number of followers, statuse counts etc. 
have helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have been mainly focusing on inferring the credibility of the claims by retrieving evidences from Google or Bing search engines. These approaches have employed a different set of features starting from manual features (e.g. cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to a fully automatic approach using deep learning networks. A recent trend started to appear and is trying to approach the detection of fake news from a stance perspective. The aim is to predict how other articles orient to a specific fact BIBREF19, BIBREF20, BIBREF21.
Emotionally-infused Model
In this section we describe the Emotionally-Infused Network we propose (EIN).
Emotionally-infused Model ::: Emotional Lexicons
Several emotional models well-grounded in psychology science have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrot BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of the emotional words in texts as well to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath:
EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated using the six Ekman's basic emotions.
EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using the eight Plutchik's emotions. This lexicon contains 14,181 words.
SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models.
LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion".
Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in the Pattrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear").
In our study we consider the 17 emotions that we shown in Figure FIGREF14.
Emotionally-infused Model ::: Model
We choose an Long short-term memory (LSTM) BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embedding (content-based) and emotional features (see Figure FIGREF24).
Emotionally-infused Model ::: Input Representation
Our network consists of two branches. In the content-based one, we use an embedding layer followed by a LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlighting) particular words over others . The attention mechanism assigns a weight to each word vector result from the LSTM layer with a focus on the classification class. The input representation for this branch is represented as follows: the input sentence $S$ of length $n$ is represented as $[S\textsubscript {1}, S\textsubscript {2} .. S\textsubscript {n}]$ where $S\textsubscript {n} \in {\rm I\!R}^d$; ${\rm I\!R}^d$ is a d-dimensional word embedding vector of the $i$-th word in the input sentence. The output vectors of the words are passed to the LSTM layer, where the LSTM learns the hidden state $h\textsubscript {t}$ by capturing the previous timesteps (past features). The produced hidden state $h\textsubscript {t}$ at each time step is passed to the attention layer which computes a "context" vector $c\textsubscript {t}$ as the weighted mean of the state sequence $h$ by:
Where $T$ is the total number of timesteps in the input sequence and $\alpha \textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj. This output vector is then concatenated with the output from the densea (see Figure FIGREF24) layer and passed to the denseb layer, which precedes a final Softmax function to predict the output classes. Since the content-based branch is concatenated with the other emotional-based branch.
On the other hand, the input representation for the emotional-based branch is defined as follows: we have $N$ emotional lexicons $L\textsubscript {n}$ where $n\in [1, 5]$, each lexicon has $M$ number of emotions depending on the emotion model that the lexicon uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E\textsubscript {m}$ of an input document using the $n$-th emotional lexicon is $L\textsubscript {n}E\textsubscript {m}$. In our implementation, the emotional vector $E\textsubscript {m}$ of a Lexicon $L\textsubscript {n}$ is built using word frequency and normalized by the input sentence's length. Each input sentence is represented using:
Where $v \in {\rm I\!R}^q$ and $q$ is:
Evaluation Framework ::: Datasets
Annotated data is a crucial source of information to analyze false information. Current status of previous works lacks available datasets of false information, where the majority of the works focus on annotating datasets from a factuality perspective. However, to analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list contains suspicious Twitter accounts.
Evaluation Framework ::: Datasets ::: News Articles
Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are interested also in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent. Therefore, we shorten long news articles into a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles or articles that do not have a textual content.
Evaluation Framework ::: Datasets ::: Twitter
For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset and we decide to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure of the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated, very short tweets, and tweets without textual content. Table TABREF35 shows a summary for both datasets.
Evaluation Framework ::: Baselines
Emotions have been used in many natural language processing tasks and they showed their efficiency BIBREF35. We aim at investigating their efficiency to detect false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN).
For the EIN model, we compare it to different baselines: a) The first one is bag-of-words with a support vector machine classifier (BOW-SVM). We test different classifiers, and we choose SVM since it gives the highest result in the 10-fold Cross Validation (CV); b) We use another baseline that is based on word embeddings where for each input document we extract an average word embedding vector by taking the mean of the embeddings for the document's words. Similarly, we test different classifiers and the Logistic Regression classifier shows the best performance (WE-LR); c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers.
Experiments and Results ::: Emotion-based Model
In our experiments, we use $20\%$ of each of the datasets for testing and we apply 10-fold cross-validation on the remain part for selecting the best classifier as well for tuning it. We tested many classifiers and we finally choose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets.
The results in both datasets show that emotional features clearly detect false news, compared to the baselines (RQ1). The emotional features perform better in the news articles dataset compared with these of tweets. We are interested in investigating also how good are the emotional features in detecting each class comparing to the RAN baseline. We choose the RAN baseline since it shows better results with regard to macro-F1 score. For doing so, we investigated the True Positive (TP) classification ratio for each class in each dataset.
The clickbait class shows the highest TPs comparing to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth to mention that for the hoax class the proposed approach is better than the random baselines with a small ratio ($4\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story. Hence, the writer tries to deliver the story in a normal way without allowing the reader to fall under suspicion. The number of instances related to the false information classes in the news articles dataset is the same. Therefore, there is not a majority class that the classifier can be biased to. This is not the case in the Twitter dataset. For the Twitter dataset, the dataset is not balanced. Therefore, where the results are biased by the majority class (propaganda). But in general, all the classes' TP ratios are larger than the corresponding ones obtained with RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim to mislead the reader. Following, we present the results obtained by the proposed emotionally-infused model.
Experiments and Results ::: Emotionally-Infused Model
In the neural model, to reduce the computational costs, instead of the cross-validation process we take another $20\%$ from the training part as a validation set (other than the $20\%$ that is prepared for testing). For the pretrained word embeddings, we use Google News Word2Vec 300-Embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers for the baselines, we use the Scikit-Learn python library, and for the deep learning network, we use Keras library with Tensorflow as backend. To tune our deep learning network (hyper-parameters), we use the Hyperopt library. And to reduce the effect of overfitting, we use early stopping technique.
In Table TABREF44 we summarize the parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the other one (Dropd) before the concatenation process. Since it is a multiclass classification process, we use categorical cross-entropy loss function. A summary of the models' parameters is presented in Table TABREF44.
Table TABREF47 summarizes the performance of the proposed model in comparison to those obtained by the baselines. We report Macro- precision, recall, and F1, including also the metric of accuracy; for comparing the models' results we consider the macro of metrics since it shows an averaged result over all the classes. The baselines that we propose clearly show high results, where the LSTM baseline has the best performance in news articles dataset. In Twitter there is a different scenario, the BOW-SVM baseline shows a higher performance with respect to LSTM. We are interested in investigating the reason behind that. Therefore, we checked the coverage ratio of the used embeddings in the Twitter dataset. We have to mention that we excluded stop words during representing the input documents using the pre-trained Google News word embeddings. In the news articles dataset, we found that the coverage ratio of the embeddings is around $94\%$ while in Twitter it is around $70\%$. Therefore, we tuned the word embeddings during the training process to improve the document's representation since we have a larger dataset from Twitter. This process contributed with $1.9\%$ on the final macro-F1 results in Twitter (the result without tuning is $53.51\%$). Even though, the results obtained with the LSTM baseline is still lower than the one obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings. Therefore, we tried different embeddings but none of them improved the result. The second baseline (W2V-LR) proved the same issue regarding the embeddings. The W2V-LR macro-F1 result in the news articles dataset is competitive, where it is much lower in Twitter. The usage of LSTM is two folds: in addition to being a good baseline, it shows also how much the emotional features contribute in the emotionally-infused network.
EIN results outperform the baselines with a large margin (around 2% in Twitter and 7% in news articles), especially in the news articles dataset. The margin between EIN and the best baseline is lower in the Twitter dataset. The results also show that combining emotional features clearly boosts the performance. We can figure out the improvement by comparing the results of EIN to LSTM. EIN shows superior results in news articles dataset with regard to the LSTM (79.43%). A similar case appears in the Twitter dataset but with a lower margin (59.70%). The results of EIN in Twitter dataset show that emotional features help the weak coverage of word embeddings to improve the performance as well as to overcome the BOW-SVM baseline.
We observed before that clickbait TP's ratio of the news articles dataset is the highest one, and this result points out that the clickbait class is less difficult to detect specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using t-distributed Stochastic Neighbor Embedding (T-SNE) technique BIBREF36 to project the document's representation from a high dimensional space to a 2D plane. Thus, we project the embeddings in EIN by extracting them from the outputs of Denseb layer (see Figure FIGREF48). We extract the embeddings twice, once from a random epoch (epoch 10) at the beginning of the training phase and the other at the last epoch.
Our aim from the early epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect with regard to the other classes. As we can notice in the 10-epoch plot, the clickbait class needs few epochs to be separated from the other types, and this supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still an overlapping with some real-news records. This results points out that emotions in clickbaits play a key role in deceiving the reader. Also, the figure shows that the disinformation classes still need more training epochs for better separation. Real-news records are totally overlapped with the false information classes as well as the false information classes with each other. On the other hand, for the last epoch, clearly, the classes are separated from each other and the more important, from the real news. But generally, there still a small overlapping between satires and hoaxes as well few records from the propaganda class.
Experiments and Results ::: EIN as Clickbaits Detector
From the previous results in Section SECREF37 as well as from what we notice in Figure FIGREF48, EIN obtains a clear separability of the clickbait class. These observations motivate us to investigate EIN as clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As we mentioned previously, this dataset originally was built using two different text sources. For clickbaits, the authors have manually identified a set of online sites that publish many clickbait articles. Whereas for the negative class, they collected headlines from a corpus of Wikinews articles collected in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbaits detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average length of words, the ratio of the number of stop words to the number of thematic words and the longest separation between the syntactically dependent words), word patterns (presence of cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slangs and determiners), and N-grams features (word, Part-Of-Speech, and syntactic n-grams). Using this set of features group, the authors tested different classifiers where SVM showed the state-of-the-art results. They considered Accuracy, Precision, Recall and F1 to compare their approach to a baseline (an online web browser extension for clickbaits detection called Downworthy).
In this experiment, we consider the third baseline (LSTM) to observe the improvement of the emotional features in the EIN model. Different from the previous experiments, this is a binary classification task. Therefore, we use binary cross-entropy as loss function and we change the Softmax layer to a Sigmoid function. The new parameters for both LSTM and EIN models are mentioned in Table TABREF44.
In Table TABREF51 we present the results of the Stop_Clickbait approach, LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector with a good margin. Furthermore, the results of the EIN are superior to the LSTM and the Stop_Clickbait detector. Considering emotions in the EIN deep learning approach improved the detection of false information. This is due to the fact that in clickbaits emotions are employed to deceive the reader.
Discussion
The results show that the detection of suspicious news in Twitter is harder than detecting them in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially in the case of the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets has differences compared to the news articles one. We found that news in Twitter has many abbreviations (amp, wrt, JFK...etc.), bad words abbreviations (WTF, LMFO...etc.), informal language presentation, and typos. This reduces the coverage ratio of word embeddings. We also noticed that suspicious news in Twitter are more related to sexual issues. To validate our observations, we extracted the mean value of sexual words using a list of sexual terms BIBREF37. The mean value is the average number of times a sexual/bad word appears in a tweet normalized by the length of the tweet. The mean value in Twitter is 0.003 while in news articles is 0.0024. Similarly, suspicious news in Twitter presented more insulting words than in news articles where the mean value in Twitter is 0.0027 and 0.0017 in news articles.
In the following, we focus on analyzing false information from an emotional perspective, aiming to answer the remaining questions: RQ2, RQ3, and RQ4.
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
Intuitively, the contributions of the emotions to the classification process are not all the same: some words may manifest the existence of specific kinds of emotions rather than others. To investigate this point, we use Information Gain (IG) in order to identify the importance of emotions in discriminating between real news and all the other types of false news (multiclass task) in both the Twitter and news articles datasets (see Figure FIGREF54). Before going through the ranking of feature importance, we notice that the shapes of the emotion rankings are very similar in Twitter and news articles. This states that, despite the fact that the language is different, both sources have a similar overall emotion distribution. In other words, false news employs a similar emotional pattern in both text sources. Since the news language in Twitter is not presented as clearly as in news articles, this observation can help to build a cross-source system that is trained on suspicious news from news articles to detect the corresponding ones in Twitter. Figure FIGREF54 also shows that the emotion "joy" is the most important emotion in both datasets, and that "despair" and "hate" are almost not used in the classification process. The ranking of the features differs between the two sources: in the news articles dataset the top important emotions are "joy", "anticipation", "fear", and "disgust" respectively, while in Twitter the top ones are "joy", "sadness", "fear", and "disgust".
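A sketch of this feature-importance analysis is shown below, using mutual information as the information-gain estimate; the variable names are hypothetical, with one column per emotion and the multiclass label as target.

```python
# Sketch: rank emotion features by information gain (estimated here with
# sklearn's mutual information) with respect to the news-type label.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_emotions_by_ig(X, y, emotion_names):
    """X: array (n_docs, n_emotions); y: class labels; returns sorted (name, IG)."""
    ig = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(ig)[::-1]
    return [(emotion_names[i], float(ig[i])) for i in order]

# rank_emotions_by_ig(features, labels, EMOTIONS) would place "joy" first
# for both sources, per Figure FIGREF54.
```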
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
We measure statistically significant differences using the t-test on emotions across real news and false news (binary task) in both datasets in Figure FIGREF55. These findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust" and "surprise" have statistically significant differences between real and suspicious news in both datasets. Some other emotions, such as "despair" and "anger", have no statistical difference in either dataset. It turns out that the results we obtain are generally consistent with the IG results in research question RQ2. We notice in the IG analysis that some emotions have a higher importance in one of the news sources: "sadness", "anger", and "fear" have a higher importance in Twitter than in news articles, and the opposite holds for "hope". We observe the same findings using the t-test.
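The per-emotion test can be sketched as follows; the DataFrame layout and column names are assumptions (one column per emotion plus a binary label).

```python
# Sketch: two-sample t-test per emotion between real and false news.
from scipy.stats import ttest_ind

def emotion_ttests(df, emotions, label_col="label", alpha=0.05):
    results = {}
    for emo in emotions:
        real = df.loc[df[label_col] == "real", emo]
        fake = df.loc[df[label_col] == "false", emo]
        stat, p = ttest_ind(real, fake, equal_var=False)
        results[emo] = {"t": stat, "p": p, "significant": p < alpha}
    return results
```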
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
False information types differ in the way they present the news to the reader. This raises a question: what are the top emotions employed in each type of false information? In Table TABREF57, we present the three emotions that contribute the most to the classification of each type. This indicates which emotion types are mostly used in each type of false information.
Table TABREF57 shows that clickbaits express "surprise" and "negative emotion" the most. This validates the definition of clickbaits as "attention redirection": they exploit the reader by convincing him/her that there is something unexpected, tinged with negative emotion. The presence of "fear" among the top features in Twitter is interesting; a recent study presents the hypothesis, based on psychological interpretations, that curiosity is the best remedy for fear BIBREF38. Taking into account the definition of clickbaits as "attention redirection", our results support this hypothesis. Furthermore, despite the language differences in both datasets, we obtain almost the same results, which reinforces our findings. For hoaxes, it is not simple to interpret a specific pattern of emotions in the results. We might justify this by the fact that hoaxes are written to convince the reader of the validity of a story; the writer therefore tries to present the story in a normal (truthful) way, similar to a real story, and the top emotions are not unique to the hoax type. What we do find from the top hoax emotions in both datasets is that they are generally different, except for the emotion "like". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to grab the reader's attention smoothly. The propaganda type has a clearer emotional interpretation considering its definition. We find that propaganda expresses "joy", "fear" and at the same time "calmness" in the news articles. "Joy" and "fear" are opposites from an emotional polarity perspective, with "joy" at the positive extreme and "fear" at the negative one, and yet "calmness" is present at the same time. This emotional shifting between the two extremes is a clear attempt at opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but with "hope" instead of "joy". Lastly, satire is defined as a type of parody presented in the typical format of mainstream journalism, in a way similar to the irony and sarcasm phenomena BIBREF39. The results of the analysis show that "disgust" and "positive emotion" are present in both datasets, but we get "negative emotion" in the news articles and "sadness" in Twitter (both placed on the negative side of emotions). We were interested in investigating the cause of the emotion "disgust", which appeared in the results from both datasets, so we conducted a manual analysis of the satire texts in both datasets in order to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion "disgust" to give a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset highlighting the words that triggered the emotion "disgust".
Conclusions and Future Work
In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and in news articles. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information, and we validated the performance of the model by comparing it to an LSTM network and other baselines. The results on the two datasets showed that clickbaits use a simpler manipulation language in which emotions help detect them. This demonstrates that emotions play a key role in deceiving the reader. Based on this result, we investigated our model's performance on a clickbait dataset and compared it to the state of the art. Our model showed superior results, with an F1 value near 96%.
The overall results confirmed that emotional features boost the EIN model, which achieved better results on three different datasets (RQ1). These results emphasize the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexually oriented and uses many insulting words. Our analysis showed that emotions can also help detect false information in Twitter. In the analysis section, we answered a set of questions regarding the distribution of emotions in false news. We found that emotions have a similar importance distribution in Twitter and news articles regardless of the differences in the language used (RQ2). The analysis showed that most of the considered emotions have a statistically significant difference between real and false news (RQ3). Emotions play a different role in each type of false information, in line with its definition (RQ4). We found that clickbaits try to attract the attention of the reader mainly by employing the "surprise" emotion. Propaganda manipulates the feelings of the readers by using extreme positive and negative emotions, while triggering a sense of "calmness" to confuse the readers and enforce a feeling of confidence. Satire news instead uses the "disgust" emotion to give a sense of humor. To sum up, the initial part of false news contains more emotions than the rest of the document, and our approach exploits this fact for their detection.
To the best of our knowledge, this is the first work that analyzes the impact of emotions on the detection of false information considering both social media and news articles. As future work, the results of our approach as a clickbait detector motivate us to develop a clickbait detector as a web browser extension. We will also study how the emotions flow inside the articles of each kind of false information, which the results of this work confirm is worth investigating. | News Articles, Twitter |
8d258899e36326183899ebc67aeb4188a86f682c | 8d258899e36326183899ebc67aeb4188a86f682c_0 | Q: What scoring function does the model use to score triples?
Text: Introduction
Knowledge bases (KBs), such as WordNet BIBREF0 , YAGO BIBREF1 , Freebase BIBREF2 and DBpedia BIBREF3 , represent relationships between entities as triples $(\mathrm {head\ entity, relation, tail\ entity})$ . Even very large knowledge bases are still far from complete BIBREF4 , BIBREF5 . Link prediction or knowledge base completion systems BIBREF6 predict which triples not in a knowledge base are likely to be true BIBREF7 , BIBREF8 . A variety of different kinds of information is potentially useful here, including information extracted from external corpora BIBREF9 , BIBREF10 and the other relationships that hold between the entities BIBREF11 , BIBREF12 . For example, toutanova-EtAl:2015:EMNLP used information from the external ClueWeb-12 corpus to significantly enhance performance.
While integrating a wide variety of information sources can produce excellent results BIBREF13 , there are several reasons for studying simpler models that directly optimize a score function for the triples in a knowledge base, such as the one presented here. First, additional information sources might not be available, e.g., for knowledge bases for specialized domains. Second, models that don't exploit external resources are simpler and thus typically much faster to train than the more complex models using additional information. Third, the more complex models that exploit external information are typically extensions of these simpler models, and are often initialized with parameters estimated by such simpler models, so improvements to the simpler models should yield corresponding improvements to the more complex models as well.
Embedding models for KB completion associate entities and/or relations with dense feature vectors or matrices. Such models obtain state-of-the-art performance BIBREF14 , BIBREF8 , BIBREF15 , BIBREF16 , BIBREF4 , BIBREF17 , BIBREF18 and generalize to large KBs BIBREF19 . Table 1 summarizes a number of prominent embedding models for KB completion.
Let $(h, r, t)$ represent a triple. In all of the models discussed here, the head entity $h$ and the tail entity $t$ are represented by vectors $\textbf {h}$ and $\textbf {t}\in \mathbb {R}^{k}$ respectively. The Unstructured model BIBREF15 assumes that $\textbf {h} \approx \textbf {t}$ . As the Unstructured model does not take the relationship $r$ into account, it cannot distinguish different relation types. The Structured Embedding (SE) model BIBREF8 extends the unstructured model by assuming that $h$ and $t$ are similar only in a relation-dependent subspace. It represents each relation $r$ with two matrices $\textbf {W}_{r,1}$ and $\textbf {W}_{r,2} \in \mathbb {R}^{k\times k}$ , which are chosen so that $\textbf {W}_{r,1}\textbf {h} \approx \textbf {W}_{r,2}\textbf {t}$ . The TransE model BIBREF16 is inspired by models such as Word2Vec BIBREF20 where relationships between words often correspond to translations in latent feature space. The TransE model represents each relation $r$ by a translation vector $\textbf {r} \in \mathbb {R}^{k}$ , which is chosen so that $\textbf {h} + \textbf {r} \approx \textbf {t}$ .
The primary contribution of this paper is that two very simple relation-prediction models, SE and TransE, can be combined into a single model, which we call STransE. Specifically, we use relation-specific matrices $\textbf {W}_{r,1}$ and $\textbf {W}_{r,2}$ as in the SE model to identify the relation-dependent aspects of both $h$ and $t$ , and use a vector $\textbf {r}$ as in the TransE model to describe the relationship between $h$ and $t$ in this subspace. Specifically, our new KB completion model STransE chooses $\textbf {W}_{r,1}$ , $\textbf {W}_{r,2}$ and $\textbf {r}$ so that $\textbf {W}_{r,1}\textbf {h} + \textbf {r} \approx \textbf {W}_{r,2}\textbf {t}$ . That is, a TransE-style relationship holds in some relation-dependent subspace, and crucially, this subspace may involve very different projections of the head $\textbf {h}$ and tail $\textbf {t}$ . So $\textbf {W}_{r,1}$ and $\textbf {W}_{r,2}$ can highlight, suppress, or even change the sign of, relation-specific attributes of $\textbf {h}$ and $\textbf {t}$ . For example, for the “purchases” relationship, certain attributes of individuals $h$ (e.g., age, gender, marital status) are presumably strongly correlated with very different attributes of objects $t$ (e.g., sports car, washing machine and the like).
As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does.
Our approach
Let $\mathcal {E}$ denote the set of entities and $\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \in \mathcal {E}$ and $r \in \mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\prime }}(h^{\prime },t^{\prime })$ of an implausible triple $(h^{\prime },r^{\prime },t^{\prime })$ . We define the STransE score function $f_r(h, t)$ as follows:
$f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}$
using either the $\ell _1$ or the $\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $\mathcal {L} = \sum _{\begin{array}{c}(h,r,t) \in \mathcal {G} \\ (h^{\prime },r,t^{\prime }) \in \mathcal {G}^{\prime }_{(h, r, t)}\end{array}} [\gamma + f_r(h, t) - f_r(h^{\prime }, t^{\prime })]_+$
where $[x]_+ = \max (0, x)$ , $\gamma $ is the margin hyper-parameter, $\mathcal {G}$ is the training set consisting of correct triples, and $\mathcal {G}^{\prime }_{(h, r, t)} = \lbrace (h^{\prime }, r, t) \mid h^{\prime } \in \mathcal {E}, (h^{\prime }, r, t) \notin \mathcal {G} \rbrace \cup \lbrace (h, r, t^{\prime }) \mid t^{\prime } \in \mathcal {E}, (h, r, t^{\prime }) \notin \mathcal {G} \rbrace $ is the set of incorrect triples generated by corrupting a correct triple $(h, r, t)\in \mathcal {G}$ .
We use Stochastic Gradient Descent (SGD) to minimize $\mathcal {L}$ , and impose the following constraints during training: $\Vert \textbf {h}\Vert _2 \leqslant 1$ , $\Vert \textbf {r}\Vert _2 \leqslant 1$ , $\Vert \textbf {t}\Vert _2 \leqslant 1$ , $\Vert \textbf {W}_{r,1}\textbf {h}\Vert _2 \leqslant 1$ and $\Vert \textbf {W}_{r,2}\textbf {t}\Vert _2 \leqslant 1$ .
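The score and objective above translate directly into code; the following is a minimal NumPy sketch (the SGD updates, the norm constraints and the corruption sampling are omitted, and the data structures are assumptions).

```python
# Sketch of the STransE score f_r(h, t) and the margin-based objective.
import numpy as np

def stranse_score(h, r, t, W1, W2, norm_ord=1):
    """f_r(h, t) = || W_{r,1} h + r - W_{r,2} t ||  (l1 or l2 norm)."""
    return np.linalg.norm(W1 @ h + r - W2 @ t, ord=norm_ord)

def margin_loss(correct, corrupted, E, R, gamma=5.0, norm_ord=1):
    """Sum of [gamma + f_r(h, t) - f_r(h', t')]_+ over aligned triple pairs.
    E maps entity -> vector; R maps relation -> (W1, W2, r_vec)."""
    loss = 0.0
    for (h, r, t), (h2, _, t2) in zip(correct, corrupted):
        W1, W2, r_vec = R[r]
        f_pos = stranse_score(E[h], r_vec, E[t], W1, W2, norm_ord)
        f_neg = stranse_score(E[h2], r_vec, E[t2], W1, W2, norm_ord)
        loss += max(0.0, gamma + f_pos - f_neg)
    return loss
```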
Related work
Table 1 summarizes related embedding models for link prediction and KB completion. The models differ in the score functions $f_r(h, t)$ and the algorithms used to optimize the margin-based objective function, e.g., SGD, AdaGrad BIBREF21 , AdaDelta BIBREF22 and L-BFGS BIBREF23 .
DISTMULT BIBREF24 is based on a Bilinear model BIBREF14 , BIBREF15 , BIBREF25 where each relation is represented by a diagonal rather than a full matrix. The neural tensor network (NTN) model BIBREF4 uses a bilinear tensor operator to represent each relation while ProjE BIBREF26 could be viewed as a simplified version of NTN with diagonal matrices. Similar quadratic forms are used to model entities and relations in KG2E BIBREF27 , ComplEx BIBREF28 , TATEC BIBREF29 and RSTE BIBREF30 . In addition, HolE BIBREF31 uses circular correlation—a compositional operator—which could be interpreted as a compression of the tensor product.
The TransH model BIBREF17 associates each relation with a relation-specific hyperplane and uses a projection vector to project entity vectors onto that hyperplane. TransD BIBREF32 and TransR/CTransR BIBREF33 extend the TransH model using two projection vectors and a matrix to project entity vectors into a relation-specific space, respectively. TransD learns a relation-role specific mapping just as STransE, but represents this mapping by projection vectors rather than full matrices, as in STransE. The lppTransD model BIBREF34 extends TransD to additionally use two projection vectors for representing each relation. In fact, our STransE model and TranSparse BIBREF35 can be viewed as direct extensions of the TransR model, where head and tail entities are associated with their own projection matrices, rather than using the same matrix for both, as in TransR and CTransR.
Recently, several authors have shown that relation paths between entities in KBs provide richer information and improve the relationship prediction BIBREF36 , BIBREF37 , BIBREF18 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , BIBREF44 . In addition, NickelMTG15 reviews other approaches for learning from KBs and multi-relational data.
Experiments
For link prediction evaluation, we conduct experiments and compare the performance of our STransE model with published results on the benchmark WN18 and FB15k datasets BIBREF16 . Information about these datasets is given in Table 2 .
Task and evaluation protocol
The link prediction task BIBREF8 , BIBREF15 , BIBREF16 predicts the head or tail entity given the relation type and the other entity, i.e. predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$ where $?$ denotes the missing element. The results are evaluated using the ranking induced by the score function $f_r(h,t)$ on test triples.
For each test triple $(h, r, t)$ , we corrupted it by replacing either $h$ or $t$ by each of the possible entities in turn, and then ranked these candidates in ascending order of their implausibility value computed by the score function. This is called the “Raw” setting protocol. For the “Filtered” setting protocol described in BIBREF16 , we removed any corrupted triples that appear in the knowledge base, to avoid cases where a correct corrupted triple might be ranked higher than the test triple. The “Filtered” setting thus provides a clearer view on the ranking performance. Following BIBREF16 , we report the mean rank and the Hits@10 (i.e., the proportion of test triples in which the target entity was ranked in the top 10 predictions) for each model. In addition, we report the mean reciprocal rank, which is commonly used in information retrieval. In both “Raw” and “Filtered” settings, lower mean rank, higher mean reciprocal rank or higher Hits@10 indicates better link prediction performance.
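The ranking protocol can be sketched as follows; `score` stands for the trained model's implausibility function and `known_triples` for the set of correct triples used for filtering (the brute-force loop is written for clarity, not efficiency).

```python
# Sketch of the "Filtered" evaluation: mean rank, mean reciprocal rank, Hits@10.
def filtered_metrics(test_triples, entities, known_triples, score):
    ranks = []
    for (h, r, t) in test_triples:
        for corrupt_tail in (False, True):       # corrupt the head, then the tail
            target = score(h, r, t)
            rank = 1
            for e in entities:
                cand = (h, r, e) if corrupt_tail else (e, r, t)
                if cand == (h, r, t) or cand in known_triples:
                    continue                     # "Filtered" setting
                if score(*cand) < target:        # lower score = more plausible
                    rank += 1
            ranks.append(rank)
    mr = sum(ranks) / len(ranks)
    mrr = sum(1.0 / rk for rk in ranks) / len(ranks)
    hits10 = sum(rk <= 10 for rk in ranks) / len(ranks)
    return mr, mrr, hits10
```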
Following TransR BIBREF33 , TransD BIBREF32 , rTransE BIBREF37 , PTransE BIBREF36 , TATEC BIBREF29 and TranSparse BIBREF35 , we used the entity and relation vectors produced by TransE BIBREF16 to initialize the entity and relation vectors in STransE, and we initialized the relation matrices with identity matrices. We applied the “Bernoulli” trick used also in previous work for generating head or tail entities when sampling incorrect triples BIBREF17 , BIBREF33 , BIBREF27 , BIBREF32 , BIBREF36 , BIBREF34 , BIBREF35 . We ran SGD for 2,000 epochs to estimate the model parameters. Following NIPS20135071 we used a grid search on validation set to choose either the $l_1$ or $l_2$ norm in the score function $f$ , as well as to set the SGD learning rate $\lambda \in \lbrace 0.0001, 0.0005, 0.001, 0.005, 0.01 \rbrace $ , the margin hyper-parameter $\gamma \in \lbrace 1, 3, 5 \rbrace $ and the vector size $k\in \lbrace 50, 100 \rbrace $ . The lowest filtered mean rank on the validation set was obtained when using the $l_1$ norm in $f$ on both WN18 and FB15k, and when $\lambda = 0.0005, \gamma = 5, \text{ and } k = 50$ for WN18, and $\lambda = 0.0001, \gamma = 1, \text{ and } k = 100$ for FB15k.
Main results
Table 3 compares the link prediction results of our STransE model with results reported in prior work, using the same experimental setup. The first 15 rows report the performance of the models that do not exploit information about alternative paths between head and tail entities. The next 5 rows report results of the models that exploit information about relation paths. The last 3 rows present results for the models which make use of textual mentions derived from a large external corpus.
It is clear that the models with the additional external corpus information obtained the best results. In future work we plan to extend the STransE model to incorporate such additional information. Table 3 also shows that the models employing path information generally achieve better results than models that do not use such information. In terms of models not exploiting path information or external information, the STransE model produces the best filtered mean rank on WN18 and the highest filtered Hits@10 and mean reciprocal rank on FB15k. Compared to the closely related models SE, TransE, TransR, CTransR, TransD and TranSparse, our STransE model does better than these models on both WN18 and FB15k.
Following NIPS20135071, Table 4 analyzes Hits@10 results on FB15k with respect to the relation categories defined as follows: for each relation type $r$ , we computed the averaged number $a_h$ of heads $h$ for a pair $(r, t)$ and the averaged number $a_t$ of tails $t$ for a pair $(h, r)$ . If $a_h < 1.5$ and $a_t < 1.5$ , then $r$ is labeled 1-1. If $a_h \geqslant 1.5$ and $a_t < 1.5$ , then $r$ is labeled M-1. If $a_h < 1.5$ and $a_t \geqslant 1.5$ , then $r$ is labeled as 1-M. If $a_h \geqslant 1.5$ and $a_t \geqslant 1.5$ , then $r$ is labeled as M-M. 1.4%, 8.9%, 14.6% and 75.1% of the test triples belong to a relation type classified as 1-1, 1-M, M-1 and M-M, respectively.
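A small sketch of this categorisation, computed from the raw triples, is given below (variable names are illustrative).

```python
# Sketch: label each relation 1-1, 1-M, M-1 or M-M from a_h and a_t.
from collections import defaultdict

def relation_categories(triples):
    heads_per_rt, tails_per_hr = defaultdict(set), defaultdict(set)
    for h, r, t in triples:
        heads_per_rt[(r, t)].add(h)
        tails_per_hr[(h, r)].add(t)
    categories = {}
    for r in {rel for _, rel, _ in triples}:
        a_h_counts = [len(v) for (rel, _), v in heads_per_rt.items() if rel == r]
        a_t_counts = [len(v) for (_, rel), v in tails_per_hr.items() if rel == r]
        a_h = sum(a_h_counts) / len(a_h_counts)   # avg heads per (r, t) pair
        a_t = sum(a_t_counts) / len(a_t_counts)   # avg tails per (h, r) pair
        if a_h < 1.5 and a_t < 1.5:
            categories[r] = "1-1"
        elif a_h < 1.5:
            categories[r] = "1-M"
        elif a_t < 1.5:
            categories[r] = "M-1"
        else:
            categories[r] = "M-M"
    return categories
```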
Table 4 shows that in comparison to prior models not using path information, STransE obtains the second highest Hits@10 result for the M-M relation category at $(80.1\% + 83.1\%) / 2 = 81.6\%$ which is 0.5% smaller than the Hits@10 result of TranSparse for M-M. However, STransE obtains a 2.5% higher Hits@10 result than TranSparse for M-1. In addition, STransE also performs better than TransD for the 1-M and M-1 relation categories. We believe the improved performance of the STransE model is due to its use of full matrices, rather than just projection vectors as in TransD. This permits STransE to model diverse and complex relation categories (such as 1-M, M-1 and especially M-M) better than TransD and other similar models. However, STransE is not as good as TransD for the 1-1 relations. Perhaps the extra parameters in STransE hurt performance in this case (note that 1-1 relations are relatively rare, so STransE does better overall).
Conclusion and future work
This paper presented a new embedding model for link prediction and KB completion. Our STransE combines insights from several simpler embedding models, specifically the Structured Embedding model BIBREF8 and the TransE model BIBREF16 , by using a low-dimensional vector and two projection matrices to represent each relation. STransE, while being conceptually simple, produces highly competitive results on standard link prediction evaluations, and scores better than the embedding-based models it builds on. Thus it is a suitable candidate for serving as future baseline for more complex models in the link prediction task.
In future work we plan to extend STransE to exploit relation path information in knowledge bases, in a manner similar to lin-EtAl:2015:EMNLP1, guu-miller-liang:2015:EMNLP or NguyenCoNLL2016.
Acknowledgments
This research was supported by a Google award through the Natural Language Understanding Focused Program, and under the Australian Research Council's Discovery Projects funding scheme (project number DP160102156).
NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. The first author is supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship. | $f_r(h, t) = \Vert \textbf {W}_{r,1}\textbf {h} + \textbf {r} - \textbf {W}_{r,2}\textbf {t}\Vert _{\ell _{1/2}}$ |
955ca31999309685c1daa5cb03867971ca99ec52 | 955ca31999309685c1daa5cb03867971ca99ec52_0 | Q: What datasets are used to evaluate the model?
| WN18, FB15k |
9b2b063e8a9938da195c9c0d6caa3e37a4a615a8 | 9b2b063e8a9938da195c9c0d6caa3e37a4a615a8_0 | Q: How long did it take to train each Doc2Vec model?
Text: Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, which allows the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method.
Methods The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word content and stem content of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, which showed no clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm.
Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related-articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1.
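For reference, a related-articles query through the public E-utilities eLink endpoint can be sketched as below; the endpoint and parameters follow the E-utilities documentation, but the exact JSON layout of the response should be treated as an assumption and checked against the live service.

```python
# Sketch: retrieve pmra-related PMIDs (with their similarity scores) for one article.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def related_pmids(pmid):
    params = {"dbfrom": "pubmed", "db": "pubmed", "id": pmid,
              "cmd": "neighbor_score", "retmode": "json"}
    data = requests.get(EUTILS, params=params, timeout=30).json()
    # Assumed response layout: linksets -> linksetdbs -> links = [{"id": ..., "score": ...}]
    links = data["linksets"][0]["linksetdbs"][0]["links"]
    return [(link["id"], int(link["score"])) for link in links]
```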
Background ::: The pmra model
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find document C relevant when reading document D is calculated. For this purpose, the authors introduced the concept of eliteness. Briefly, a topic $S_{i}$ is considered an elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This allows documents sharing a maximum of elite topics to be brought closer. In the article presenting the pmra model, the authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similarity score is computed using the MeSH terms associated with both documents D and C. Such an indexing is highly time-consuming and has to be performed manually.
Background ::: Documents embedding
Nowadays, embedding models allow a text to be represented as a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input to deep neural networks. However, these models have been used by the IR community as well: once all documents are fitted in the same multidimensional space, the cosine distance between two document vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common to all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned a randomly initialised vector of fixed length, which will be concatenated with the vectors of the words composing $D_{x}$ during training (word and document vectors share the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to PV-DM, the main difference being the goal of the final classifier. Instead of concatenating the document vector with word vectors, the goal here is to output the words of this window just by using the mathematical representation of the document.
Background ::: Related Work
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to cluster positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentence similarity scoring BIBREF5. Recently, studies have started to use document embeddings on the PubMed corpus. In 2017, Gargiulo et al. used a combination of word vectors coming from the abstract to bring closer similar documents from PubMed BIBREF6. The same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their accuracy measurement task consisted in retrieving documents having a small cosine distance to the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study has so far been designed to compare document embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search for what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require the MeSH terms associated with the abstracts from PubMed. Briefly, MeSH is a medical terminology used to index documents on PubMed in order to perform keyword-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 5th, 2018 BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowercased, tokenized and concatenated to compose the PubMed documents corpus.
Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
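For illustration, training a model with these six parameters in Gensim can be sketched as follows; the values shown are placeholders (the selected combination is reported in Table TABREF16), and window_size maps to Gensim's window argument.

```python
# Sketch: training a Doc2Vec model over (PMID, "title + abstract") pairs with Gensim.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def train_d2v(corpus, dm=0, vector_size=512, window_size=5,
              alpha=0.025, sample=1e-4, hs=0, workers=8):
    docs = [TaggedDocument(words=text.split(), tags=[pmid]) for pmid, text in corpus]
    return Doc2Vec(documents=docs,
                   dm=dm,                 # 0 = PV-DBOW, 1 = PV-DM
                   vector_size=vector_size,
                   window=window_size,
                   alpha=alpha,
                   sample=sample,
                   hs=hs,                 # 1 = hierarchical softmax, 0 = negative sampling
                   workers=workers)
```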
A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score was calculated as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its top-ten closest documents. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
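One plausible reading of this accuracy score is sketched below (Gensim ≥ 4 API; mesh_by_pmid maps each PMID to its set of MeSH descriptors, and taking the percentage relative to the query's own MeSH set is an assumption).

```python
# Sketch: average percentage of MeSH terms shared between each held-out abstract
# and its ten nearest documents according to a candidate Doc2Vec model.
def mesh_accuracy(model, test_docs, mesh_by_pmid, topn=10):
    scores = []
    for pmid, tokens in test_docs:                # tokens: lowercased token list
        vec = model.infer_vector(tokens)
        neighbours = model.dv.most_similar([vec], topn=topn)
        query_mesh = mesh_by_pmid[pmid]
        if not query_mesh:
            continue
        shared = [100 * len(query_mesh & mesh_by_pmid[n_pmid]) / len(query_mesh)
                  for n_pmid, _ in neighbours]
        scores.append(sum(shared) / len(shared))
    return sum(scores) / len(scores)
```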
Methods ::: Training
The final models were trained on a server powered by four XEON E7 processors (144 threads) and 1 TB of RAM. From the total corpus (16,048,372 documents), 1% (160,482 documents) was extracted as a test set (named TeS) and discarded from the training. The final models were trained on the remaining 15,887,890 documents, representing the training set called TrS.
Methods ::: Evaluation
The goal being to assess whether D2V could effectively replace the related-documents function on PubMed, five different document similarity evaluations were designed, as seen in Figure FIGREF9. These tasks were designed to cover every level of similarity, from the most general (the context) down to character-level similarity.
Indeed, a reliable algorithm for finding related documents should be able to bring closer texts sharing a similar context, some important ideas (word stems) or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and should not be based on raw character similarity (two documents sharing the same proportion of the letter “A” or having a similar length should not be brought together if they do not exhibit the upper levels of similarity).
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ was compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents).
Methods ::: Evaluation ::: Words co-occurrences
A matrix of word co-occurrences was constructed on the total corpus from PubMed. Briefly, each document was lowercased and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 pairs were randomly selected and the number of times each of them co-occurs was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
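A possible implementation of this proximity score is sketched below; the corpus iteration and the stopword list are assumptions, and the quadratic per-document pair counting is written for clarity rather than efficiency.

```python
# Sketch: document-level word co-occurrence counts and the sampled pair score.
import random
from collections import Counter
from itertools import combinations, product

def build_cooccurrence(corpus_tokens):
    counts = Counter()
    for tokens in corpus_tokens:
        for w1, w2 in combinations(sorted(set(tokens)), 2):
            counts[(w1, w2)] += 1
    return counts

def cooccurrence_score(query_tokens, close_tokens, counts, stopwords, n_pairs=500):
    pairs = [tuple(sorted(p)) for p in product(set(query_tokens) - stopwords,
                                               set(close_tokens) - stopwords)]
    sample = random.sample(pairs, min(n_pairs, len(pairs)))
    return sum(counts.get(p, 0) for p in sample) / max(len(sample), 1)
```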
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots). This way, the influence of conjugation forms or other suffixes can be assessed.
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term found associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
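This scoring rule can be sketched as follows; whether “major topic” refers to the query's or the close document's indexing is not specified above, so the sketch checks the query's, and the data layout is an assumption.

```python
# Sketch: MeSH similarity score between a query document and one close document.
def mesh_similarity(query_mesh, close_mesh):
    """Both arguments map descriptor -> {"major": bool, "qualifiers": set}."""
    score = 0
    for desc, info in query_mesh.items():
        if desc not in close_mesh:
            continue
        score += 1                                   # shared descriptor
        if info["major"]:
            score += 3                               # flagged as major topic
        score += len(info["qualifiers"] & close_mesh[desc]["qualifiers"])
    return score
```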
Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ were randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, according to the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and each of the top-ten documents was blindly assessed on a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators were asked to rank publications according to their relevance to the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture looks more precise, with a highest accuracy of 25.78%, while PV-DM reached only 18.08% of common MeSH terms on average between query and top-close documents. Then, embedding vectors having a large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained with the PV-DBOW architecture, was selected. The best parameters regarding PV-DM, but having the same vector_size value, were also kept (13.30% accuracy). The concatenation of the models is thus possible without dimension reduction, this method being promoted by Mikolov and Le BIBREF3. Selected values are listed in Table TABREF16.
Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6 \times 10^{-5}$) and under the pmra model ($-5.4 \times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1 \times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1).
Results ::: Evaluation ::: Words co-occurrences
Once scores from pmra had been normalized, the correlation between word co-occurrences and the scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1 \times 10^{-5}$ for the PV-DBOW and $-3 \times 10^{-6}$ for PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. By looking at the green dots or line, the pmra seems to give less importance to the co-occurrence of terms. A low slope is observed ($-5.8 \times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and computed score.
Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content; the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document being unlikely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit within the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat trend-line slopes ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models.
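The paper does not detail the normalisation step; a simple min-max rescaling of the raw pmra scores into $[0, 1]$, as sketched below, is one plausible reading of it.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale raw pmra scores (here roughly 18M to 75M) into [0, 1] so they
    can be compared with cosine scores. This is a sketch of one plausible
    normalisation; the study does not specify the exact method used."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

print(min_max_normalize([18e6, 30e6, 75e6]))  # [0.0, ~0.21, 1.0]
```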
Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether or not the context influences this proximity. Looking at figure FIGREF23A, we can see that PV-DBOW and pmra are very close in terms of MeSH score, indicating that on average they bring closer documents sharing a similar number of common MeSH labels. The pmra model seems more likely to output documents sharing a higher MeSH score (the distribution tail extending beyond 4, with a mean of 1.58 and a standard deviation of 1.06), while the PV-DM brings closer documents that are less likely to share a large number of MeSH terms, with a majority of scores between 0 and 1 (mean of 1.16, standard deviation of 0.73). Figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and that for documents returned by both the PV-DM and PV-DBOW models. The PV-DBOW algorithm looks much closer to the pmra in terms of common MeSH labels between two close documents, with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating fewer MeSH terms in common for close articles.
Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.
The agreement between all evaluators on the three-modality scale was assessed by computing Cohen's kappa score $K$ with the scikit-learn Python library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarians (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrast, the lowest $K$ was computed using evaluations from the two medical doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.
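For reference, a minimal example of this computation with scikit-learn (the ratings are toy values on the 0/1/2 relevance scale, not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Ratings of the same ten documents by two evaluators (illustrative values)
ratings_evaluator_1 = [2, 1, 0, 2, 1, 1, 0, 2, 2, 1]
ratings_evaluator_2 = [2, 1, 0, 1, 1, 2, 0, 2, 2, 1]

kappa = cohen_kappa_score(ratings_evaluator_1, ratings_evaluator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```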
Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
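A sketch of this interval estimation is shown below; it assumes the pairwise concordance rates are already computed and lie strictly between 0 and 1, and the function and input values are illustrative only.

```python
import numpy as np
from scipy import stats

def concordance_ci(rates, confidence=0.95):
    """Mean concordance rate with a Student confidence interval computed on
    arctanh-transformed values and mapped back with tanh. Sketch of the
    procedure described above; rates must lie strictly between 0 and 1."""
    z = np.arctanh(np.asarray(rates, dtype=float))
    mean, sem = z.mean(), stats.sem(z)
    low, high = stats.t.interval(confidence, len(z) - 1, loc=mean, scale=sem)
    return np.tanh(mean), (np.tanh(low), np.tanh(high))

mean_rate, (low, high) = concordance_ci([0.73, 0.75, 0.80, 0.71, 0.78])
print(f"mean = {mean_rate:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```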
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), the models are clearly not equivalent (Figure FIGREF26). The D2V model was rated 80 times as "bad relevance", while the pmra returned badly relevant documents only 24 times. Looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). For the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with that of the pmra, the algorithm currently used in PubMed.
Regarding the string-length task, even if the trend-line slopes are very close to zero, a slight negative correlation is observed between the difference in number of characters and the scores calculated by PV-DBOW and pmra. This result can be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining the pmra’s).
The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by either D2V model or the pmra. The opposite could have been expected, given the way the pmra links documents (using terms common to both documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. This random sampling effect could have led to these results.
D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge of or analysis on the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing many indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title.
Regarding the manual evaluation, the D2V PV-DBOW model was rated much lower than the pmra model. Its results were judged inaccurate more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can be performed on full-text data, while both the optimisation of the hyper-parameters and one evaluation task are based on the abstracts' indexing. However, this bias should have a limited impact on the results: since the indexing is based on the main topics of the documents, these subjects should also be mentioned in the abstract. This manual indexing also introduces an indexer bias; it is well known in the information retrieval community that intra- and inter-indexer biases exist.
As the parameter-optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms that are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true; if it were, MeSH terms would not have been selected from full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles sharing many MeSH terms has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of the vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by merging the two presented here. Another point of debate in the text-embedding community concerns the part-of-speech tagging of the text before sending it to the model (during both training and use). This additional information could lead to a better understanding of the text, particularly through the disambiguation of homonyms.
Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge of the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. | Unanswerable |
ac3c88ace59bf75788370062db139f60499c2056 | ac3c88ace59bf75788370062db139f60499c2056_0 | Q: How better are results for pmra algorithm than Doc2Vec in human evaluation?
Text: Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method.
Methods The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy, training more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between the pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, which showed no clear agreement between evaluators, calls for future studies to better understand this difference between the PV-DBOW and the pmra algorithm.
Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related-articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). In the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. Through the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1.
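A minimal sketch of such an API call is shown below; it assumes the `cmd=neighbor_score` mode of eLink and a JSON response whose `linksets/linksetdbs/links` layout should be checked against the current E-utilities documentation.

```python
import requests

ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def related_articles(pmid):
    """Return (PMID, pmra score) pairs for the articles related to `pmid`.
    Sketch only: the JSON layout assumed below should be verified against
    the E-utilities documentation."""
    params = {"dbfrom": "pubmed", "db": "pubmed", "id": pmid,
              "cmd": "neighbor_score", "retmode": "json"}
    response = requests.get(ELINK, params=params, timeout=30)
    response.raise_for_status()
    linksetdbs = response.json()["linksets"][0]["linksetdbs"]
    # Keep the pmra-scored neighbours (the "pubmed_pubmed" link name is an assumption)
    links = next(db["links"] for db in linksetdbs if db.get("linkname") == "pubmed_pubmed")
    return [(link["id"], int(link["score"])) for link in links]

# Example call (replace the placeholder with a real PMID):
# print(related_articles("123456")[:5])
```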
Background ::: The pmra model
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed.
Background ::: Documents embedding
Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document.
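A toy sketch of the two architectures with the Gensim library is given below (using the 4.x API, where document vectors are accessed via `model.dv`); the corpus, tags and hyper-parameter values are illustrative only and this is not the study's training script.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each abstract becomes a TaggedDocument tagged with its PMID
# (illustrative values, not from the study)
corpus = [
    TaggedDocument(words="deep learning for protein structure prediction".split(), tags=["100001"]),
    TaggedDocument(words="statistical analysis of clinical trial outcomes".split(), tags=["100002"]),
]

# dm=1 -> PV-DM: word and document vectors are combined to predict the next word
pv_dm = Doc2Vec(corpus, dm=1, vector_size=64, window=5, min_count=1, epochs=20)

# dm=0 -> PV-DBOW: the document vector alone predicts words sampled from the window
pv_dbow = Doc2Vec(corpus, dm=0, vector_size=64, window=5, min_count=1, epochs=20)

# Both models can embed unseen text and search for the closest training documents
vector = pv_dbow.infer_vector("machine learning for biomedical text".split())
print(pv_dbow.dv.most_similar([vector], topn=2))
```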
Background ::: Related Work
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their designed accuracy measurement task was consisting in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentences similarity datasets, when the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed has been compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study was designed so far to compare documents embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search what impacts the most this proximity. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus.
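A minimal sketch of this preparation step, assuming Gensim's `simple_preprocess` as the tokenizer (the study does not name its exact tokenizer), with the PMID used as the document tag:

```python
from gensim.utils import simple_preprocess
from gensim.models.doc2vec import TaggedDocument

def to_tagged_document(pmid, title, abstract):
    """Lower-case, tokenize and concatenate title and abstract, tagging the
    result with the article's PMID. Illustrative helper, not the study's code."""
    tokens = simple_preprocess(title) + simple_preprocess(abstract)
    return TaggedDocument(words=tokens, tags=[str(pmid)])

doc = to_tagged_document(123456, "A title about PubMed", "An abstract about document embedding.")
print(doc.tags, doc.words[:5])
```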
Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
A list of possible values was defined for each of these six parameters. The full amount of possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% were then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score represented by the average of common MeSH terms percentage between each document $D_{i}$ from the 15,000 extracted texts and their returning top-ten closest documents was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
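The selection criterion can be sketched as follows for a single trained model; the exact definition of the “percentage of common MeSH terms” is not given in the text, so the overlap measure and helper names below are assumptions.

```python
def common_mesh_percentage(query_mesh, neighbour_mesh):
    """Percentage of the query's MeSH terms also found in one neighbour
    (one plausible reading of the overlap used in the text)."""
    if not query_mesh:
        return 0.0
    return 100.0 * len(query_mesh & neighbour_mesh) / len(query_mesh)

def model_accuracy(model, test_docs, mesh_by_pmid, topn=10):
    """Average, over held-out documents, of the mean MeSH overlap with the
    top-n closest training documents. `model` is a trained gensim Doc2Vec,
    `test_docs` holds (pmid, token list) pairs, `mesh_by_pmid` maps a PMID
    to its set of MeSH terms (assumed structures)."""
    scores = []
    for pmid, tokens in test_docs:
        vector = model.infer_vector(tokens)
        neighbours = model.dv.most_similar([vector], topn=topn)
        overlap = [common_mesh_percentage(mesh_by_pmid[pmid], mesh_by_pmid[n_pmid])
                   for n_pmid, _ in neighbours]
        scores.append(sum(overlap) / len(overlap))
    return sum(scores) / len(scores)
```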
Methods ::: Training
The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482 documents) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on the 15,887,890 remaining documents, representing the training set called TrS.
Methods ::: Evaluation
The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document-similarity evaluations were designed, as seen in figure FIGREF9. These tasks were designed to cover every level of similarity, from the most general (the context) to character-level similarity.
Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing a similar context, some important ideas (word stems) or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and should not be based on raw character similarity (two documents sharing the same proportion of the letter “A” or having a similar length should not be brought together if they do not exhibit higher-level similarity).
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ was compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents).
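A possible implementation of this length measure, using Gensim's stopword list (an assumption; the study does not specify which stopword list was used):

```python
from gensim.parsing.preprocessing import remove_stopwords

def char_length(text):
    """Length in characters once stopwords and spaces are removed
    (a sketch of the pre-processing described above)."""
    return len(remove_stopwords(text.lower()).replace(" ", ""))

query, closest = "the impact of aspirin on stroke", "aspirin and stroke prevention"
print(abs(char_length(query) - char_length(closest)))
```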
Methods ::: Evaluation ::: Words co-occurrences
A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
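A toy sketch of this construction is given below; on the full PubMed corpus a sparse or on-disk representation would be required, and the in-memory `Counter` used here is only for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count, over a corpus of tokenized documents, how many times each pair
    of distinct words appears in the same document (illustrative sketch)."""
    counts = Counter()
    for tokens in documents:
        for pair in combinations(sorted(set(tokens)), 2):
            counts[pair] += 1
    return counts

docs = [["aspirin", "stroke", "prevention"], ["aspirin", "stroke", "dose"]]
counts = cooccurrence_counts(docs)
print(counts[("aspirin", "stroke")])  # 2
```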
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim's PorterStemmer to keep only word roots), so that the influence of conjugation forms or other suffixes could be assessed.
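For illustration, Gensim exposes this stemmer directly:

```python
from gensim.parsing.porter import PorterStemmer

stemmer = PorterStemmer()
# stem_sentence() stems each whitespace-separated token of the sentence
print(stemmer.stem_sentence("comparing compared comparisons of embedding models"))
```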
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles that were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$ and the relevance between $D_{x_i}$ and every of the top-ten documents was blindly assessed by a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture looks more precise with a highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms in average between query and top-close documents. Then, embedding vectors having large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained thanks to the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% of accuracy). The concatenation of models is thus possible without dimensions reduction, this method being promoted by Mikolov and Lee BIBREF3. Selected values are listed on the table TABREF16.
Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6\times 10^{-5}$) and under the pmra model ($-5.4\times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1\times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be close in the vectorial space created by the PV-DBOW (cosine score closer to 1).
Results ::: Evaluation ::: Words co-occurrences
Once the scores from the pmra had been normalized, the correlation between word co-occurrences and the scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1\times 10^{-5}$ for the PV-DBOW and $-3\times 10^{-6}$ for the PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. Looking at the green dots and line, the pmra seems to give less importance to the co-occurrence of terms. A low slope is observed ($-5.8\times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and the computed score.
Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content; the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document being unlikely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit within the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat trend-line slopes ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models.
Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether the context influence or not this proximity. By looking at the figure FIGREF23A, we can see that PV-DBOW and pmra are very close in term of MeSH score, indicating that they bring closer documents sharing a similar number of common MeSH labels in average. The pmra model seems to be more likely to output documents sharing a higher MeSH score (the distribution tail going further 4 with a mean equal to 1.58, standard deviation: 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of score between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). The figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both PV-DM and PV-DBOW models. The PV-DBOW algorithm looks way closer to the pmra in terms of common MeSH labels between two close documents with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating less MeSH in common for close articles.
Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.
The agreement between all evaluators regarding the three-modalities scale was assessed by computing the Cohen's kappa score $K$ thanks to the SKlearn Python's library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarian (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrary, the lowest $K$ was computed using evaluations from the two Medical Doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.
Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared versus the pmra, the algorithm actually used in Pubmed.
Regarding the strings length task, even if trending lines slopes are very close to zero, a slight negative correlation is observed between the difference in terms of character and scores calculated by PV-DBOW and pmra. This result can be relativized. Indeed, it was expected that two different abstracts regarding their number of characters are more likely to be different in term of context. The longest text can treat more subjects with different words (explaining D2V’s results) or to be associated with more MeSH labels (clarifying pmra ones’).
The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by either D2V model or the pmra. The opposite could have been expected, given the way the pmra links documents (using terms common to both documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. This random sampling effect could have led to these results.
D2V takes in account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge of analysis on the documents are needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in the Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels on documents from PubMed. The result displayed on the figure FIGREF23 could have been expected for the pmra algorithm, this model using the MeSH terms on the statistical formula used to link documents as well as elite or elitness terms. It was thus expected that two documents sharing a lot of indexing labels would have been seen close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely with its abstract and title.
Regarding the manual evaluation, D2V PV-DBOW model has been very largely underrated compared to the pmra model. Its results have been seen as not accurate more than three times compared to the Pubmed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's one is around 14. However, the real signification of these results can be relativised. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics from the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. It is well-known in the information retrieval community that intra- and inter-indexers bias exist.
As the parameters optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed an abstract is enough to semantically represent the whole text. But this is not completely true. If it was, MeSH terms would have not be selected on full texts in the first place. Also, the principle that a PubMed related article feature has to give articles which have a lot of MeSH terms in common has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by the merge of the two presented here. Another moot point on the text embedding community is about the part-of-speech tagging of the text before sending it to the model (during both training and utilisation). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms.
Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge of the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. | The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents. |
26012f57cba21ba44b9a9f7ed8b1ed9e8ee7625d | 26012f57cba21ba44b9a9f7ed8b1ed9e8ee7625d_0 | Q: What Doc2Vec architectures other than PV-DBOW have been tried?
Text: Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method.
Methods Doc2Vec algorithm was used to train models allowing to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. Parameters combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluations tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between the pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, which showed no clear agreement between evaluators, calls for future studies to better understand this difference between the PV-DBOW and the pmra algorithm.
Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide with more than 29,000,000 freely available abstracts. Each article is identified by an unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a service of related articles search, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents title of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of others PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1.
Background ::: The pmra model
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed.
Background ::: Documents embedding
Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document.
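The proximity measure mentioned above reduces to the cosine between two document vectors; a small self-contained sketch with illustrative values:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine between two document vectors; values close to 1 indicate
    documents that are close in the embedding space."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity([0.2, 0.1, 0.7], [0.25, 0.05, 0.65]))  # ~0.99
```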
Background ::: Related Work
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their designed accuracy measurement task was consisting in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentences similarity datasets, when the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed has been compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study was designed so far to compare documents embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search what impacts the most this proximity. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus.
Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
A list of possible values was defined for each of these six parameters. The full amount of possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% were then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score represented by the average of common MeSH terms percentage between each document $D_{i}$ from the 15,000 extracted texts and their returning top-ten closest documents was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
Methods ::: Training
The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482 documents) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on the 15,887,890 remaining documents, representing the training set called TrS.
Methods ::: Evaluation
The goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. These tasks were designed to cover every similarities, from the most general (the context) to the character-level similarity.
Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words), an amount of non-stemmed vocabulary (e.g. verbs tenses are taken in account) and should not be based on raw character-similarity (two documents sharing the same proportion of letter “A” or having a similar length should not be brought together if they do not exhibit upper levels similarity).
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).
Methods ::: Evaluation ::: Words co-occurrences
A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied on 10,000 stemmed texts (using the Gensim’s PorterStemmer to only keep word’s roots). The influence of the conjugation form or other suffixes can be assessed.
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles that were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$ and the relevance between $D_{x_i}$ and every of the top-ten documents was blindly assessed by a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture looks more precise with a highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms in average between query and top-close documents. Then, embedding vectors having large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained thanks to the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% of accuracy). The concatenation of models is thus possible without dimensions reduction, this method being promoted by Mikolov and Lee BIBREF3. Selected values are listed on the table TABREF16.
Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6\times 10^{-5}$) and under the pmra model ($-5.4\times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1\times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be close in the vectorial space created by the PV-DBOW (cosine score closer to 1).
Results ::: Evaluation ::: Words co-occurrences
Once the scores from the pmra had been normalized, the correlation between word co-occurrences and the scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1\times 10^{-5}$ for the PV-DBOW and $-3\times 10^{-6}$ for the PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. Looking at the green dots and line, the pmra seems to give less importance to the co-occurrence of terms. A low slope is observed ($-5.8\times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and the computed score.
Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content; the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document being unlikely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit within the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat trend-line slopes ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models.
Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether the context influence or not this proximity. By looking at the figure FIGREF23A, we can see that PV-DBOW and pmra are very close in term of MeSH score, indicating that they bring closer documents sharing a similar number of common MeSH labels in average. The pmra model seems to be more likely to output documents sharing a higher MeSH score (the distribution tail going further 4 with a mean equal to 1.58, standard deviation: 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of score between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). The figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both PV-DM and PV-DBOW models. The PV-DBOW algorithm looks way closer to the pmra in terms of common MeSH labels between two close documents with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating less MeSH in common for close articles.
Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.
The agreement between all evaluators regarding the three-modalities scale was assessed by computing the Cohen's kappa score $K$ thanks to the SKlearn Python's library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarian (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrary, the lowest $K$ was computed using evaluations from the two Medical Doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.
Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with the pmra, the algorithm currently used in PubMed.
Regarding the string length task, even if the trend line slopes are very close to zero, a slight negative correlation is observed between the difference in terms of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra’s).
The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by the D2V models or pmra. The opposite could have been expected, given the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results.
D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing a lot of indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title.
Regarding the manual evaluation, the D2V PV-DBOW model was rated much lower than the pmra model. Its results were judged as not relevant more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics from the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. It is well-known in the information retrieval community that intra- and inter-indexers bias exist.
As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true. If it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by the merge of the two presented here. Another moot point on the text embedding community is about the part-of-speech tagging of the text before sending it to the model (during both training and utilisation). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms.
Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, with only a moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. | PV-DM |
bd26a6d5d8b68d62e1b6eaf974796f3c34a839c4 | bd26a6d5d8b68d62e1b6eaf974796f3c34a839c4_0 | Q: What four evaluation tasks are defined to determine what influences proximity?
Text: Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method.
Methods The Doc2Vec algorithm was used to train models allowing documents to be vectorised. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between the pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between the PV-DBOW and pmra algorithms.
Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related-articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (PubMed Related Articles) model BIBREF1.
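As an illustration only (not code from this study), the eLink endpoint can be queried for pmra-scored neighbours roughly as follows; the PMID is a placeholder and the JSON layout assumed in the comments should be checked against the E-utilities documentation.

```python
# Illustrative eLink query for pmra-scored neighbours of one PMID (PMID is a placeholder).
import requests

ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"
params = {
    "dbfrom": "pubmed",
    "db": "pubmed",
    "cmd": "neighbor_score",   # related PMIDs together with their similarity scores
    "id": "28000000",          # placeholder PMID
    "retmode": "json",
}
payload = requests.get(ELINK, params=params, timeout=30).json()

# The parsing below assumes the usual linksets/linksetdbs JSON layout; adjust if needed.
for linksetdb in payload.get("linksets", [{}])[0].get("linksetdbs", []):
    if linksetdb.get("linkname") == "pubmed_pubmed":
        for link in linksetdb.get("links", [])[:10]:
            print(link)        # each entry pairs a related PMID with its pmra score
```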
Background ::: The pmra model
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed.
Background ::: Documents embedding
Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document.
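The following is a minimal gensim sketch, not the authors' code, showing how the two architectures described above differ only by the dm flag; the toy corpus and parameter values are placeholders.

```python
# Toy gensim example: the dm flag selects PV-DM (dm=1) or PV-DBOW (dm=0).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw_docs = {
    "PMID_A": "title and abstract of a first article ...",
    "PMID_B": "title and abstract of a second article ...",
}
corpus = [TaggedDocument(words=text.lower().split(), tags=[pmid])
          for pmid, text in raw_docs.items()]

pv_dm   = Doc2Vec(corpus, dm=1, vector_size=300, window=5, min_count=1, epochs=10)
pv_dbow = Doc2Vec(corpus, dm=0, vector_size=300, window=5, min_count=1, epochs=10)
```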
Background ::: Related Work
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their designed accuracy measurement task was consisting in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentences similarity datasets, when the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed has been compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study was designed so far to compare documents embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search what impacts the most this proximity. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus.
Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the architecture used for training (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score, defined as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its top-ten closest documents, was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
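A simplified, single-machine sketch of this grid search could look as follows (gensim 4.x naming); train_corpus, held_out_corpus and mesh_index are assumed data structures, and the value lists are illustrative rather than the grids actually explored.

```python
# Simplified grid search over the six parameters; value lists are illustrative.
from itertools import product
from gensim.models.doc2vec import Doc2Vec

grid = {
    "window":      [5, 10],        # "window_size" in the text above
    "alpha":       [0.025, 0.05],
    "sample":      [0, 1e-5],
    "dm":          [0, 1],         # PV-DBOW or PV-DM
    "hs":          [0, 1],         # negative sampling or hierarchical softmax
    "vector_size": [128, 256, 512],
}

def mesh_accuracy(model, held_out_corpus, mesh_index, topn=10):
    """Average share of MeSH terms common to a query and its top-n neighbours."""
    per_query = []
    for pmid, tokens in held_out_corpus:          # (PMID, token list) pairs
        vec = model.infer_vector(tokens)
        neighbours = model.dv.most_similar([vec], topn=topn)
        query_mesh = mesh_index[pmid]             # set of MeSH terms for the query
        overlaps = [len(query_mesh & mesh_index[nid]) / max(len(query_mesh), 1)
                    for nid, _ in neighbours if nid in mesh_index]
        per_query.append(sum(overlaps) / max(len(overlaps), 1))
    return sum(per_query) / max(len(per_query), 1)

best_score, best_params = -1.0, None
for combo in product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    model = Doc2Vec(train_corpus, min_count=5, epochs=10, **params)  # train_corpus: TaggedDocument list
    score = mesh_accuracy(model, held_out_corpus, mesh_index)
    if score > best_score:
        best_score, best_params = score, params
```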
Methods ::: Training
The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set, called TrS.
Methods ::: Evaluation
The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as seen in figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) down to character-level similarity.
Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words), an amount of non-stemmed vocabulary (e.g. verbs tenses are taken in account) and should not be based on raw character-similarity (two documents sharing the same proportion of letter “A” or having a similar length should not be brought together if they do not exhibit upper levels similarity).
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ has been compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents).
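A tiny sketch of this length comparison, assuming gensim's built-in stopword list is an acceptable stand-in for the one used by the authors; query_abstract and closest_abstract are placeholder variables.

```python
# Placeholder variables; gensim's stopword list stands in for the authors' pre-processing.
from gensim.parsing.preprocessing import remove_stopwords

def effective_length(text):
    return len(remove_stopwords(text.lower()).replace(" ", ""))

query_abstract, closest_abstract = "some query abstract ...", "its top-close abstract ..."
length_gap = abs(effective_length(query_abstract) - effective_length(closest_abstract))
```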
Methods ::: Evaluation ::: Words co-occurrences
A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
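A possible implementation of this sampling-based score is sketched below; cooc is assumed to be the document-level co-occurrence matrix described above, stored as a dictionary keyed by sorted word pairs, and stopword filtering is omitted for brevity.

```python
# `cooc` is assumed to map a sorted (word_a, word_b) tuple to its document-level count.
import random
from itertools import product

def cooccurrence_score(query_tokens, close_tokens, cooc, n_pairs=500):
    pairs = list(product(set(query_tokens), set(close_tokens)))
    sampled = random.sample(pairs, min(n_pairs, len(pairs)))
    counts = [cooc.get(tuple(sorted(pair)), 0) for pair in sampled]
    return sum(counts) / len(counts) if counts else 0.0
```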
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots). The influence of conjugation forms or other suffixes can thus be assessed.
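For reference, a minimal stemming call with the Porter stemmer shipped in gensim might look like this (the sentence is a placeholder):

```python
from gensim.parsing.porter import PorterStemmer

stemmer = PorterStemmer()
print(stemmer.stem_sentence("documents were tokenized and stemmed"))
```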
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
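A sketch of these scoring rules, under the assumption that each document's indexing is available as a mapping from MeSH descriptor to its major-topic flag and qualifiers (the exact interpretation of the major-topic bonus is ours):

```python
# Each document's indexing is assumed to be {descriptor: {"major": bool, "qualifiers": set}}.
def mesh_score(query_mesh, candidate_mesh):
    score = 0
    for descriptor, info in query_mesh.items():
        if descriptor not in candidate_mesh:
            continue
        score += 1                                    # shared descriptor
        if info.get("major", False):
            score += 3                                # shared major topic (our reading of the rule)
        shared_qualifiers = (info.get("qualifiers", set())
                             & candidate_mesh[descriptor].get("qualifiers", set()))
        score += len(shared_qualifiers)               # +1 per common qualifier
    return score
```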
Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$ and the relevance between $D_{x_i}$ and every of the top-ten documents was blindly assessed by a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture appears more precise, with a highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms on average between query and top-close documents. Then, embedding vectors having a large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained with the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% accuracy). The concatenation of models is thus possible without dimension reduction, this method being promoted by Mikolov and Le BIBREF3. Selected values are listed in table TABREF16.
Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6\times 10^{-5}$) and under the pmra model ($-5.4\times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1\times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1).
Results ::: Evaluation ::: Words co-occurrences
Once the scores from pmra had been normalized, the correlation between word co-occurrences and the scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1\times 10^{-5}$ for the PV-DBOW and $-3\times 10^{-6}$ for PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. Looking at the green dots and line, the pmra seems to give little importance to the co-occurrence of terms. A low slope is observed ($-5.8\times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and computed score.
Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content, where the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document is not likely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit within the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat trend line slopes ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models.
Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether the context influences this proximity or not. Looking at figure FIGREF23A, we can see that PV-DBOW and pmra are very close in terms of MeSH score, indicating that, on average, they bring closer documents sharing a similar number of common MeSH labels. The pmra model seems more likely to output documents sharing a higher MeSH score (the distribution tail going beyond 4, with a mean equal to 1.58 and a standard deviation of 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of scores between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). Figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both the PV-DM and PV-DBOW models. The PV-DBOW algorithm is much closer to the pmra in terms of common MeSH labels between two close documents, with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating fewer MeSH terms in common for close articles.
Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.
The agreement between all evaluators regarding the three-modality scale was assessed by computing Cohen's kappa score $K$ with the scikit-learn Python library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarians (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrast, the lowest $K$ was computed using evaluations from the two medical doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.
Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with the pmra, the algorithm currently used in PubMed.
Regarding the string length task, even if the trend line slopes are very close to zero, a slight negative correlation is observed between the difference in terms of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra’s).
The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by the D2V models or pmra. The opposite could have been expected, given the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results.
D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing a lot of indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title.
Regarding the manual evaluation, the D2V PV-DBOW model was rated much lower than the pmra model. Its results were judged as not relevant more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics from the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. It is well-known in the information retrieval community that intra- and inter-indexers bias exist.
As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true. If it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by the merge of the two presented here. Another moot point on the text embedding community is about the part-of-speech tagging of the text before sending it to the model (during both training and utilisation). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms.
Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, with only a moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. | String length, Words co-occurrences, Stems co-occurrences, MeSH similarity |
7d4fad6367f28c67ad22487094489680c45f5062 | 7d4fad6367f28c67ad22487094489680c45f5062_0 | Q: What six parameters were optimized with grid search?
Text: Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method.
Methods The Doc2Vec algorithm was used to train models allowing documents to be vectorised. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between the pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between the PV-DBOW and pmra algorithms.
Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related-articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (PubMed Related Articles) model BIBREF1.
Background ::: The pmra model
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed.
Background ::: Documents embedding
Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document.
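To make the retrieval side concrete, a trained model could serve a related-articles query roughly as follows (gensim 4.x naming; the model and tokens arguments are assumed to be a trained Doc2Vec model and the lowercased tokens of the query's title and abstract).

```python
def related_articles(model, tokens, topn=10):
    # model: trained gensim Doc2Vec (4.x); tokens: lowercased title+abstract tokens of the query.
    inferred = model.infer_vector(tokens)
    return model.dv.most_similar([inferred], topn=topn)  # list of (PMID, cosine similarity)
```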
Background ::: Related Work
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their designed accuracy measurement task was consisting in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentences similarity datasets, when the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed has been compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study was designed so far to compare documents embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search what impacts the most this proximity. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus.
Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the architecture used for training (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score, defined as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its top-ten closest documents, was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
Methods ::: Training
The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set, called TrS.
Methods ::: Evaluation
The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as seen in figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) down to character-level similarity.
Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words), an amount of non-stemmed vocabulary (e.g. verbs tenses are taken in account) and should not be based on raw character-similarity (two documents sharing the same proportion of letter “A” or having a similar length should not be brought together if they do not exhibit upper levels similarity).
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ has been compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents).
Methods ::: Evaluation ::: Words co-occurrences
A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots). The influence of conjugation forms or other suffixes can thus be assessed.
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$ and the relevance between $D_{x_i}$ and every of the top-ten documents was blindly assessed by a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture appears more precise, with a highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms on average between query and top-close documents. Then, embedding vectors having a large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained with the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% accuracy). The concatenation of models is thus possible without dimension reduction, this method being promoted by Mikolov and Le BIBREF3. Selected values are listed in table TABREF16.
Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6\times 10^{-5}$) and under the pmra model ($-5.4\times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1\times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1).
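Such a trend-line slope can be obtained with an ordinary least-squares fit; in this sketch, length_gaps and cosine_scores are assumed to be aligned arrays built from the sampled document pairs.

```python
import numpy as np

# length_gaps and cosine_scores: aligned 1-D arrays built from the sampled document pairs.
slope, intercept = np.polyfit(length_gaps, cosine_scores, deg=1)
print(f"trend-line slope: {slope:.2e}")
```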
Results ::: Evaluation ::: Words co-occurrences
Once the scores from pmra had been normalized, the correlation between word co-occurrences and the scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1\times 10^{-5}$ for the PV-DBOW and $-3\times 10^{-6}$ for PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. Looking at the green dots and line, the pmra seems to give little importance to the co-occurrence of terms. A low slope is observed ($-5.8\times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and computed score.
Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content, where the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document is not likely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit within the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat trend line slopes ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models.
Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether the context influences this proximity or not. Looking at figure FIGREF23A, we can see that PV-DBOW and pmra are very close in terms of MeSH score, indicating that, on average, they bring closer documents sharing a similar number of common MeSH labels. The pmra model seems more likely to output documents sharing a higher MeSH score (the distribution tail going beyond 4, with a mean equal to 1.58 and a standard deviation of 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of scores between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). Figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both the PV-DM and PV-DBOW models. The PV-DBOW algorithm is much closer to the pmra in terms of common MeSH labels between two close documents, with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating fewer MeSH terms in common for close articles.
Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model has been used versus the pmra. Its close score in the MeSH evaluation task compared to the pmra's one indicates an ability to bring closer documents sharing same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models and they were asked to output the 10 closest documents for each. Their relevance was then assessed by four evaluators.
The agreement between all evaluators regarding the three-modality scale was assessed by computing Cohen's kappa score $K$ with the scikit-learn Python library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarians (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. In contrast, the lowest $K$ was computed using evaluations from the two medical doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement.
Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate has been defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B for both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges the Student confidence interval estimation method has been used. Deviation from normal has been reduced by hyperbolic arc-tangent transformation. The global mean concordance by pooling all judges together was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), models are clearly not equivalents (Figure FIGREF26). The D2V model has been rated 80 times as "bad relevance" while the pmra returned only 24 times badly relevant documents. By looking at the results ranking, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD).
Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with the pmra, the algorithm currently used in PubMed.
Regarding the string length task, even if the trend line slopes are very close to zero, a slight negative correlation is observed between the difference in terms of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra’s).
The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by the D2V models or pmra. The opposite could have been expected, given the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results.
D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing a lot of indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title.
Regarding the manual evaluation, the D2V PV-DBOW model was rated much lower than the pmra model. Its results were judged as not relevant more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics from the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. It is well-known in the information retrieval community that intra- and inter-indexers bias exist.
As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true. If it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by the merge of the two presented here. Another moot point on the text embedding community is about the part-of-speech tagging of the text before sending it to the model (during both training and utilisation). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms.
Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. | window_size, alpha, sample, dm, hs, vector_size
3aa7173612995223a904cc0f8eef4ff203cbb860 | 3aa7173612995223a904cc0f8eef4ff203cbb860_0 | Q: What baseline models do they compare against?
Text: paragraph 1
Content: Task Definition
1. Describe which field the task of commonsense reading comprehension (CRC) belongs to and how important it is.
2. Define the task of CRC
3. Data feature of CRC
4. Figure 1 shows an example.
Machine Reading Comprehension (MRC) is an extremely challenging topic in the natural language processing field. It requires a system to answer questions referring to a given passage, no matter whether the answer is mentioned in the passage. MRC consists of several sub-tasks, such as cloze-style reading comprehension, span-extraction reading comprehension, and open-domain reading comprehension. Most existing datasets emphasize questions whose answers are mentioned in the passage, since such questions do not need any commonsense. In real reading comprehension, a human reader can fully understand the passage with prior knowledge to answer the question. To directly relate commonsense knowledge to reading comprehension, SemEval2018 Task 11 defines a new sub-task called Commonsense Reading Comprehension, aiming at answering questions that require both commonsense knowledge and understanding of the passage. The challenge of this task lies in answering questions that require a system to draw inferences from multiple sentences in the passage together with commonsense knowledge that does not appear in the passage explicitly. Table 1 shows an example of CRC.
paragraph 2
Content: Previous Research
1. Categorize the methods in SemEval2018 Task 11
2. Describe the first method
3. Describe the second method
4. State which method our work belongs to
Most studies on the CRC task are neural network based (NN-based) models, which typically have the following characteristics. Firstly, word representations are augmented by additional lexical information, such as pre-trained embeddings, POS and NER embeddings, relation embeddings, and some other handcrafted features. Secondly, the interaction process is usually implemented by the attention mechanism, which can provide interaction representations like the choice-aware passage, the choice-aware question, and the question-aware passage. Thirdly, the original representations and interaction representations are fused together and then aggregated by a Bidirectional Long Short-Term Memory Network (BiLSTM) BIBREF1 to get high-order semantic information. Fourthly, the final output is the sum of the scores of passage to choice and question to choice, based on their bilinear interactions.
The NN-based models have shown strong performance on this task. However, there are still some limitations. Firstly, the two fusion processes of passage and question to choice are implemented separately, until producing the final output. Secondly, the fusion method used in existing reading comprehension models is usually implemented by concatenation BIBREF2 , BIBREF3 , which is monotonous and cannot capture the partial comparison information between two parts. Studies on Natural Language Inference (NLI) have explored more functions BIBREF4 , BIBREF5 , such as element-wise subtraction and element-wise multiplication, to capture more comparison information, which have been proved to be effective.
In this paper, we introduce a Multi-Perspective Fusion Network (MPFN) to tackle these limitations. The model can fuse the choice with the passage and the question simultaneously to get a multi-perspective fusion representation. Furthermore, inspired by the element-wise subtraction and element-wise multiplication functions used in BIBREF5 , we define three kinds of fusion functions from multiple perspectives to fuse the choice, the choice-aware passage, and the choice-aware question. The three fusions are union fusion, difference fusion, and similarity fusion. Note that we name the concatenation fusion method as union fusion in this paper, which collects the global information. The difference fusion and the similarity fusion can discover the different parts and similar parts among the choice, the choice-aware passage, and the choice-aware question respectively.
MPFN comprises an encoding layer, a context fusion layer, and an output layer. In the encoding layer, we employ a BiLSTM as the encoder to convert the embeddings of the passage, question, and choice into their corresponding context representations. To acquire better semantic representations, we apply union fusion at the word level to the choice, the choice-aware passage embedding, and the choice-aware question embedding. In the context fusion layer, we apply union fusion, difference fusion, and similarity fusion to obtain a multi-perspective fusion representation. In the output layer, a self-attention and a feed-forward neural network are used to make the final prediction.
We conduct experiments on the MCScript dataset released by BIBREF0 . Our single and ensemble models achieve accuracies of 83.52% and 84.84% on the official test set respectively. Our main contributions are as follows:
We propose a general fusion framework with two-layer fusion, which can fuse the passage, question, and choice simultaneously.
To collect multi-perspective fusion representations, we define three types of fusions, consisting of union fusion, difference fusion, and similarity fusion.
We extend the fusion method to multi-perspective to obtain deeper understanding of the passage, question, and choice.
We design several groups of experiments to evaluate the effectiveness of the three types of fusion and show that our MPFN model outperforms all the other models with an accuracy of 83.52%.
Related Work
MRC has gained significant popularity over the past few years. Several datasets have been constructed for testing the comprehension ability of a system, such as MCTest BIBREF6 , SQuAD BIBREF7 , BAbI BIBREF8 , TriviaQA BIBREF9 , RACE BIBREF10 , and NewsQA BIBREF11 . The types of passage, question and answer of these datasets are various. Each dataset focuses on one specific aspect of reading comprehension. Particularly, the MCScript BIBREF0 dataset concerns answering the question which requires using commonsense knowledge.
The passages in these datasets come from various sources, including Wikipedia articles, examinations, narrative stories, and news articles. Meanwhile, the question types and answer types vary, covering multiple choice, span answers, and exact match.
Many architectures on MRC follow the process of representation, attention, fusion, and aggregation BIBREF12 , BIBREF2 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . BiDAF BIBREF12 fuses the passage-aware question, the question-aware passage, and the original passage in context layer by concatenation, and then uses a BiLSTM for aggregation. The fusion levels in current advanced models are categorized into three types by BIBREF14 , including word-level fusion, high-level fusion, and self-boosted fusion. They further propose a FusionNet to fuse the attention information from bottom to top to obtain a fully-aware representation for answer span prediction.
BIBREF16 present a DFN model to fuse the passage, question, and choice by dynamically determining the attention strategy.
On SemEval2018 Task 11, most of the models use the attention mechanism to build interactions among the passage, the question, and the choice BIBREF17 , BIBREF3 , BIBREF18 , BIBREF19 . The most competitive models are BIBREF17 , BIBREF3 , and both of them employ concatenation fusion to integrate the information. BIBREF17 utilizes the choice-aware passage and the choice-aware question to fuse the choice at the word level. In addition, they apply the question-aware passage to fuse the passage at the context level. Different from BIBREF17 , both the choice-aware passage and the choice-aware question are fused into the choice at the context level in BIBREF3 , which is the current state-of-the-art result on the MCScript dataset.
On the NLI task, fusing the premise-aware hypothesis into the hypothesis is an effective and commonly-used method. BIBREF20 , BIBREF21 leverage the concatenation of the hypothesis and the hypothesis-aware premise to help improve the performance of their models. The element-wise subtraction and element-wise multiplication between the hypothesis and the hypothesis-aware premise are employed in BIBREF5 to enhance the concatenation, and further achieved the state-of-the-art results on the Stanford Natural Language Inference BIBREF22 benchmark.
Almost all the models on CRC only use the union fusion. In our MPFN model, we design another two fusion methods to extend the perspective of fusion. We evaluate the MPFN model on the MRC task and achieve the state-of-the-art result.
Model
The overview of our Multi-Perspective Fusion Network (MPFN) is shown in Fig. 1 . Given a narrative passage about a series of daily activities and several corresponding questions, a system is required to select the correct choice from two options for each question. In this paper, we denote $\bf {p=\lbrace p_1,p_2,...,p_{|p|}\rbrace }$ as the passage, $\bf {q=\lbrace q_1,q_2,...,q_{|q|}\rbrace }$ as a question, $\bf {c=\lbrace c_1,c_2,...,c_{|c|}\rbrace }$ as one of the candidate choices, and a true label $y^{*} \in \lbrace 0,1\rbrace $ . Our model aims to compute a probability for each choice and take the one with higher probability as the prediction label. Our model consists of three layers: an encoding layer, a context fusion layer, and an output layer. The details of each layer are described in the following subsections.
Encoding Layer
This layer aims to encode the passage embedding $p$ , the question embedding $q$ , and the choice embedding $c$ into context embeddings. Specifically, we use a one-layer BiLSTM as the context encoder.
$$&\bar{c}_i = \text{BiLSTM}(c, i) , & i \in [1,2, \cdots ,|c|] \\ &\bar{p}_j = \text{BiLSTM}(p, j) , & j \in [1,2, \cdots ,|p|] \\ &\bar{q}_k = \text{BiLSTM}(q, k) , & k \in [1,2, \cdots ,|q|] $$ (Eq. 18)
The embeddings of $p$ , $q$ and $c$ are semantically rich word representations consisting of several kinds of embeddings. Specifically, the embeddings of the passage and the question are the concatenation of the Glove word embedding, POS embedding, NER embedding, Relation embedding and Term Frequency feature, while the embeddings of the choice comprise the Glove word embedding, the choice-aware passage embedding $c^p$ , and the choice-aware question embedding $c^q$ . The details of each embedding are as follows:
Glove word embedding We use the 300-dimensional Glove word embeddings trained from 840B Web crawl data BIBREF23 . The out-of-vocabulary words are initialized randomly. The embedding matrix is fixed during training.
POS&NER embedding We leverage the Part-of-Speech (POS) embeddings and Named-Entity Recognition (NER) embeddings. The two embeddings $c_i^{pos}$ and $c_i^{ner}$ are randomly initialized to 12d and 8d respectively, and updated during training.
Relation embedding Relations are extracted from ConceptNet. For each word in the choice, if it satisfies any relation with another word in the passage or the question, the corresponding relation will be taken out. If there are multiple relations between two words, we just randomly choose one. The relation embeddings $c_i^{rel}$ are generated in a similar way to the POS embeddings: randomly initialized and updated during training as well.
Term Frequency Following BIBREF17 , we introduce the term frequency feature to enrich the embedding of each word. The calculation is based on English Wikipedia.
Choice-aware passage embedding The information in the passage that is relevant to the choice can help encode the choice BIBREF24 . To acquire the choice-aware passage embedding $c_i^p$ , we utilize dot product between non-linear mappings of word embeddings to compute the attention scores for the passage BIBREF25 .
$$& c_i^p = Attn(c_i,\lbrace p_j\rbrace _1^{|p|}) = \sum _{j=1}^{|p|} {\alpha }_{ij} p_j \\ & {\alpha }_{ij} \propto exp(S(c_i, p_j)), \quad S(c_i, p_j) = {ReLU(W{c_i})}^{T} ReLU(W {p_j})$$ (Eq. 19)
Choice-aware question embedding The choice-relevant question information is also important for the choice. Therefore, we adopt a similar attention mechanism as above to get the choice-aware question embedding $c_i^q=Attn(c_i, \lbrace q_k\rbrace _{1}^{|q|})$ .
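A minimal PyTorch sketch of this word-level attention (Eq. 19) is shown below. The shared projection $W$ follows the equation above, while the projection size, batch size, and sequence lengths are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordLevelAttention(nn.Module):
    """Choice-aware attention over passage (or question) word embeddings:
    c_i^p = sum_j alpha_ij p_j, with alpha_ij ~ exp(ReLU(W c_i)^T ReLU(W p_j))."""
    def __init__(self, emb_dim, proj_dim=300):
        super().__init__()
        self.proj = nn.Linear(emb_dim, proj_dim, bias=False)   # the shared mapping W

    def forward(self, choice, other):
        # choice: (batch, |c|, emb_dim), other: (batch, |p|, emb_dim)
        c = F.relu(self.proj(choice))
        o = F.relu(self.proj(other))
        alpha = F.softmax(torch.bmm(c, o.transpose(1, 2)), dim=-1)  # (batch, |c|, |p|)
        return torch.bmm(alpha, other)                              # (batch, |c|, emb_dim)

# toy usage with random tensors standing in for Glove embeddings
attn = WordLevelAttention(emb_dim=300)
choice, passage, question = (torch.randn(2, 6, 300),
                             torch.randn(2, 40, 300),
                             torch.randn(2, 10, 300))
c_p = attn(choice, passage)    # choice-aware passage embedding c_i^p
c_q = attn(choice, question)   # choice-aware question embedding c_i^q
```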
The embeddings delivered to the BiLSTM are the concatenation the above components, where $p_j = [p_j^{glove}, p_j^{pos},p_j^{ner},p_j^{rel}, p_j^{tf} ]$ , $c_i = [c_i^{glove}, c_i^{p},c_i^{q}]$ , and $q_k = [q_k^{glove}, q_k^{pos}, q_k^{ner}, q_k^{rel},q_k^{tf} ]$ .
Context Fusion Layer
This is the core layer of our MPFN model. In this layer, we define three fusion functions, which consider the union information, the different information, and the similar information of the choice, passage, and question.
Since we have obtained the choice context $\bar{c}_i$ , the passage context $\bar{p}_j$ , and the question context $\bar{q}_k$ in the encoding layer, we can calculate the choice-aware passage contexts $\tilde{c}^p_i$ and choice-aware question contexts $\tilde{c}^q_i$ . Then we deliver them together with the choice contexts $\bar{c}_i$ to the three fusion functions.
These three fusion functions fuse $\bar{c}_i$ , $\tilde{c}^p_i$ , and $\tilde{c}^q_i$ simultaneously and from multiple perspectives, taking the union, difference, and similarity information of the choice, passage, and question into consideration. To better integrate this information, we feed the three fusion outputs to a FNN for aggregation.
Choice-aware passage context In this part, we calculate the choice-aware passage representations $\tilde{c}_i^p= \sum _{j}{\beta }_{ij} \bar{p}_j$ . For model simplification, here we use dot product between choice contexts and passage contexts to compute the attention scores ${\beta }_{ij}$ :
$${\beta }_{ij}= \frac{\exp (\bar{c}_i^T \bar{p}_j)}{\sum _{j^{\prime }=1}^{|p|} \exp (\bar{c}_i^T \bar{p}_{j^{\prime }})}$$ (Eq. 21)
Choice-aware question context In a similar way as above, we get the choice-aware question context $\tilde{c}_i^q= \sum _{k}{\beta }_{ik} \bar{q}_k$ , where ${\beta }_{ik}$ is computed from the dot product of the choice context $\bar{c}_i$ and the question context $\bar{q}_k$ .
Multi-perspective Fusion This is the key module in our MPFN model. The goal of this part is to produce a multi-perspective fusion representation for the choice $\bar{c}_i$ , the choice-aware passage $\tilde{c}^p_i$ , and the choice-aware question $\tilde{c}^q_i$ . In this paper, we define fusion from three perspectives: union, difference, and similarity. Accordingly, we define three fusion functions $f^u$ , $f^d$ , and $f^s$ , built from concatenation, element-wise multiplication, and element-wise subtraction. All of the three fusion functions take the choice context, the choice-aware passage, and the choice-aware question as input. The outputs and calculation of the three functions are as follows:
$$&u_i = [\bar{c}_i \, ; \tilde{c}_i^p \,; \tilde{c}^q_i] ,\\ &d_i = ( \bar{c}_i - \tilde{c}_i^p)\odot (\bar{c_i} - \tilde{c}_i^q) ,\\ &s_i = \bar{c}_i \odot \tilde{c}_i^p \odot \tilde{c}_i^q ,$$ (Eq. 22)
where $; \,$ , $-$ , and $\odot $ represent concatenation, element-wise subtraction, and element-wise multiplication respectively. And $u_i$ , $d_i$ , and $s_i$ are the representations from the union, difference and similarity perspective respectively.
The union perspective is commonly used in a large bulk of tasks BIBREF21 , BIBREF14 , BIBREF2 . It can see the whole picture of the passage, the question, and the choice by concatenating the $\tilde{c}^p_i$ and $\tilde{c}^q_i$ together with $c_i$ . While the difference perspective captures the different parts between choice and passage, and the difference parts between choice and question by $\bar{c_i} - \tilde{c}_i^p$ and $\bar{c_i} - \tilde{c}_i^q$ respectively. The $\odot $ in difference perspective can detect the two different parts at the same time and emphasize them. In addition, the similarity perspective is capable of discovering the similar parts among the passage, the question, and the choice.
To map the three fusion representations to lower and same dimension, we apply three different FNNs with the ReLU activation to $u_i$ , $d_i$ , and $s_i$ . The final output $g_i$ is the concatenation of the results of the three FNNs, which represents a global perspective representation.
$$g_i=[f^u(u_i),f^d(d_i),f^s(s_i)] $$ (Eq. 23)
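A minimal PyTorch sketch of Eq. 22-23 is given below. It assumes the contexts are 246-dimensional (two directions of 123 BiLSTM units, matching the hyper-parameters reported later) and that each FNN is a single ReLU layer; both are assumptions for illustration, not statements from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerspectiveFusion(nn.Module):
    """Union / difference / similarity fusion of the choice context with the
    choice-aware passage and question contexts (a sketch of Eq. 22-23)."""
    def __init__(self, dim, out_dim=123):
        super().__init__()
        self.f_u = nn.Linear(3 * dim, out_dim)   # union takes the concatenation
        self.f_d = nn.Linear(dim, out_dim)       # difference perspective
        self.f_s = nn.Linear(dim, out_dim)       # similarity perspective

    def forward(self, c, c_p, c_q):
        # c, c_p, c_q: (batch, |c|, dim) choice / choice-aware passage / question contexts
        u = torch.cat([c, c_p, c_q], dim=-1)     # u_i
        d = (c - c_p) * (c - c_q)                # d_i
        s = c * c_p * c_q                        # s_i
        g = torch.cat([F.relu(self.f_u(u)),
                       F.relu(self.f_d(d)),
                       F.relu(self.f_s(s))], dim=-1)
        return g                                  # multi-perspective representation g_i

fusion = MultiPerspectiveFusion(dim=246)
c, c_p, c_q = torch.randn(2, 6, 246), torch.randn(2, 6, 246), torch.randn(2, 6, 246)
g = fusion(c, c_p, c_q)   # (2, 6, 3 * 123)
```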
Output Layer
The output layer includes a self-attention layer and a prediction layer. Following BIBREF26 , we summarize the global perspective representation $\lbrace g_i\rbrace _1^{|c|}$ into a fixed-length vector $r$ . We compute $r= \sum _{i=1}^{|c|} b_i g_i$ , where $b_i$ is the self-weighted attention score:
$$b_i = \frac{\exp (W {g}_i)}{\sum _{i^{\prime }=1}^{|c|} \exp (W {g}_{i^{\prime }})}$$ (Eq. 25)
In the prediction layer, we utilize the output of self-attention $r$ to make the final prediction.
The final output $y$ is obtained by transforming $r$ to a scalar and then applying a sigmoid activation to map it to a probability.
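The output layer (Eq. 25 plus the prediction step) can be sketched as follows, under the same illustrative dimensions as above; treating both the scoring vector and the final transformation as single linear layers is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputLayer(nn.Module):
    """Self-attention pooling over g_1..g_|c| followed by a scalar prediction."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, 1, bias=False)   # scores W g_i from Eq. 25
        self.out = nn.Linear(dim, 1)             # maps the summary r to a logit

    def forward(self, g):
        # g: (batch, |c|, dim) multi-perspective representations
        b = F.softmax(self.w(g).squeeze(-1), dim=-1)      # (batch, |c|)
        r = torch.bmm(b.unsqueeze(1), g).squeeze(1)       # (batch, dim)
        return torch.sigmoid(self.out(r)).squeeze(-1)     # probability per choice

head = OutputLayer(dim=3 * 123)
y = head(torch.randn(2, 6, 3 * 123))   # (2,) probabilities
```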
Experimental Settings
Data We conduct experiments on the MCScript BIBREF0 , which is used as the official dataset of SemEval2018 Task 11. This dataset is a collection of text passages about daily life activities and a series of questions referring to each passage, where each question is equipped with two answer choices. The MCScript comprises 9731, 1411, and 2797 questions in the training, development, and test sets respectively. For data preprocessing, we use spaCy for sentence tokenization, Part-of-Speech tagging, and Named Entity Recognition. The relations between two words are generated by ConceptNet. The MCScript is a recently released dataset, which collects 2,119 narrative texts about daily events along with 13,939 questions, of which 27.4% require commonsense inference.
Parameters We use the standard cross-entropy function as the loss function. We choose Adam BIBREF27 with its default momentum parameters for optimization. As for hyper-parameters, we set the batch size to 32, the learning rate to 0.001, and the dimension of the BiLSTM and the hidden layer of the FNN to 123. The embedding sizes of Glove, NER, POS, and Relation are 300, 8, 12, and 10 respectively. The dropout rates of the word embedding and the BiLSTM output are 0.386 and 0.40 respectively.
Experimental Results
Table 2 shows the results of our MPFN model along with the competitive models on the MCScript dataset. The TriAN achieves 81.94% in terms of test accuracy, which is the best result of the single model. The best performing ensemble result is 84.13%, provided by HMA, which is the voting results of 7 single systems.
Our single MPFN model achieves 83.52% in terms of accuracy, outperforming all the previous models. The model exceeds the HMA and TriAN by approximately 2.58% and 1.58% absolute respectively. Our ensemble model surpasses the current state-of-the-art model with an accuracy of 84.84%. We got the final ensemble result by voting on 4 single models. Every single model uses the same architecture but different parameters.
Discussion of Multi-Perspective
To study the effectiveness of each perspective, we conduct several experiments on the three single perspectives and their combinations. Table 3 presents their comparison results. The first group of models are based on the three single perspectives, and we can observe that the union perspective performs best among the three. Moreover, the union perspective achieves 82.73% in accuracy, exceeding the TriAN by 0.79% absolute. We can also see that the similarity perspective is inferior to the other two perspectives.
The second group of models in Table 3 are formed from two perspectives. Compared with the single union perspective, combining the difference perspective with the union perspective can improve the result by 0.11%. Composing the union and similarity fusion together does not help training. To our surprise, the combination of the similarity perspective and the difference perspective obtains an accuracy of 83.09%.
The last model is our MPFN model, which performs best. The final result indicates that composing the union perspective, difference perspective, and similarity perspective together for training is helpful.
Many advanced models employ a BiLSTM to further aggregate the fusion results. To investigate whether a BiLSTM can assist the model, we apply another BiLSTM to the three fusion representations in Formula 23 respectively and then put them together. The results are shown in the second column in Table 3 , which indicate that the BiLSTM does not help improve the performance of the models.
Encoding Inputs Ablation
In this section, we conduct an ablation study on the encoding inputs to examine the effectiveness of each component. The experiment results are listed in Table 3 . In Section "Encoding Layer" , we describe that our encoding inputs comprise six components: POS embedding, NER embedding, Relation embedding, Term Frequency, choice-aware passage embedding $C^p$ and choice-aware question embedding $C^q$ .
From the best model, if we remove the POS embedding and NER embedding, the accuracy drops by 0.82% and 0.9%. Without Relation embedding, the accuracy drops to 81.98%, revealing that the external relations are helpful to the context fusions. Without Term Frequency, the accuracy drops by approximately 1.61%. This behavior suggests that the Term Frequency feature has a powerful capability to guide the model.
After removing the $C^p$ , we find the performance degrades to 81.62%. This demonstrates that information in the passage is significantly important to final performance. If we remove $C^q$ from the MPFN, the accuracy drops to 82.16%. If we remove the word level fusion completely, we will obtain an 81.66% accuracy score. These results demonstrate that each component is indispensable and the bottom embeddings are the basic foundations of the top layer fusions.
Influence of Word-level Interaction
In this section, we explore the influence of word-level interaction to each perspective. Fig 2 reports the overall results of how each perspective can be affected by the lower level interaction. The $C^p$ and the $C^q$ represent the choice-aware passage embedding and the choice-aware question embedding respectively. We can observe that the results of $[C;C^p]$ , $[C;C^q]$ , and $[C;C^p;C^q]$ are all higher than the result of $C$ alone, indicating the effectiveness of word embedding interaction.
Both the union fusion and difference fusion can achieve more than 80% accuracy, while the similarity fusion is very unstable. We also observe that the difference fusion is comparable with the union fusion, which even works better than the union fusion when the information of $C^p$ is not introduced into the input of encoding. The similarity fusion performs poorly in $C$ and $[C;C^q]$ , while yielding a huge increase in the remaining two groups of experiments, which is an interesting phenomenon. We infer that the similarity fusion needs to be activated by the union fusion.
In summary, we can conclude that integrating the information of $C^p$ into $C$ can greatly improve the performance of the model, and that combining $C^q$ together with $C^p$ can further increase the accuracy. This also suggests that the information in the passage is richer than that in the question.
Visualization
In this section, we visualize the union and difference fusion representations and show them in Fig 3 , analyzing their characteristics and comparing them to discover some connections. The values of the similarity fusion are too small to observe useful information intuitively, so we do not show it here. We use the example presented in Table 1 for visualization, where the question is Why didn't the child go to bed by themselves? and the corresponding true choice is The child wanted to continue playing.
The left region in Fig 3 is the union fusion. The most intuitive observation is that it captures comprehensive information: the values of child, wanted, and playing are obviously higher than those of other words. This is consistent with our prior expectation, because the concatenation operation adopted in union fusion does not lose any content. The difference fusion, shown in the right region of Fig 3, focuses on some specific words. By further comparison, we find that the difference fusion can pay attention to content ignored by the union fusion; moreover, content acquired by the union is not focused on by the difference again. In other words, the union fusion and difference fusion indeed emphasize information from different perspectives.
Conclusion
In this paper, we propose the Multi-Perspective Fusion Network (MPFN) for the Commonsense Reading Comprehension (CRC) task. We propose a more general framework for CRC by designing the difference and similarity fusion to assist the union fusion. Our MPFN model achieves an accuracy of 83.52% on MCScript, outperforming the previous models. The experimental results show that union fusion based on the choice-aware passage, the choice-aware question, and the choice can surpass the TriAN and HMA models. The difference fusion performs stably and is comparable with the union fusion. We find that the word-level union fusion can significantly influence the context-level fusion. The choice-aware passage word embedding can activate the similarity fusion. We find that combining the similar parts and the different parts together can obtain the best performance among the two-perspective models. By taking the three types of fusion methods into consideration, our MPFN model achieves a state-of-the-art result.
Acknowledgements
This work is funded by Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (17PT05), the Natural Science Foundation of China (61300081), and the Graduate Innovation Fund of BLCU (No.18YCX010). | SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble) |
acc8d9918d19c212ec256181e51292f2957b37d7 | acc8d9918d19c212ec256181e51292f2957b37d7_0 | Q: What are the differences with previous applications of neural networks for this task?
Text: Introduction
The Internet provides instant access to a wide variety of online content, news included. Formerly, users had static preferences, gravitating towards their trusted sources, incurring an unwavering sense of loyalty. The same cannot be said for current trends since users are likely to go with any source readily available to them.
In order to stay in business, news agencies have switched, in part, to a digital front. Usually, they generate revenue by (1) advertisements on their websites, or (2) a subscription based model for articles that might interest users. However, since the same information is available via multiple sources, no comment can be made on the preference of the reader. To lure in more readers and increase the number of clicks on their content, subsequently increasing their agency's revenue, writers have begun adopting a new technique - clickbait.
The concept of clickbait is formalised as something to encourage readers to click on hyperlinks based on snippets of information accompanying it, especially when those links lead to content of dubious value or interest. Clickbaiting is the intentional act of over-promising or purposely misrepresenting - in a headline, on social media, in an image, or some combination - what can be expected while reading a story on the web. It is designed to create and, consequently, capitalise on the Loewenstein information gap BIBREF0 . Sometimes, especially in cases where such headlines are found on social media, the links can redirect to a page with an unoriginal story which contains repeated or distorted facts from the original article itself.
Our engine is built on three components. The first leverages neural networks for sequential modeling of text. The article title is represented as a sequence of word vectors, and each word of the title is further converted into character-level embeddings. These features serve as input to a bidirectional LSTM model. An affixed attention layer allows the network to treat each word in the title in a differential manner. The next component focuses on the similarity between the article title and its actual content. For this, we generate Doc2Vec embeddings for the pair, which act as input to a Siamese net, projecting them into a highly structured space whose geometry reflects complex semantic relationships. The last part of this system attempts to quantify the similarity of the attached image, if any, to the article title. Finally, the output of each component is concatenated and sent as input to a fully connected layer to generate a score for the task.
Related Work
The task of automating clickbait detection has risen to prominence fairly recently. Initial attempts for the same have worked on (1) news headlines, and (2) heavy feature engineering for the particular dataset. BIBREF1 's work is one of the earliest pieces of literature available in the field, focusing on an aggregation of news headlines from previously categorised clickbait and non-clickbait sources. Apart from defining different types of clickbait, they emphasise on the presence of language peculiarities exploited by writers for this purpose. These include qualitative informality metrics and use of forward references in the title to keep the reader on the hook. The first instance of detecting clickbait across social media can be traced to BIBREF2 , hand-crafting linguistic features, including a reference dictionary of clickbait phrases, over a dataset of crowdsourced tweets BIBREF3 . However, BIBREF4 argued that work done specifically for Twitter had to be expanded since clickbait was available throughout the Internet, and not just social networks.
It was not until BIBREF5 that neural networks were tried out for the task as the authors used the same news dataset as BIBREF4 to develop a deep learning based model to detect clickbait. They used distributional semantics to represent article titles, and BiLSTM to model sequential data and its dependencies. Since then, BIBREF6 has also experimented with Twitter data BIBREF3 deploying a BiLSTM for each of the textual features (post-text, target-title, target-paragraphs, target-description, target-keywords, post-time) available in the corpus, and finally concatenating the dense output layers of the network before forwarding it to a fully connected layer. Since it was proposed in BIBREF7 , the attention mechanism has been used for a variety of text-classification tasks, such as fake news detection and aspect-based sentiment analysis. BIBREF8 used a self-attentive BiGRU to infer the importance of tweet tokens in predicting the annotation distribution of the task.
One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task.
Model Architecture
In this section, we present our hybrid approach to clickbait detection. We first explain the three individual components followed by their fusion, which is our proposed model. These components are (1) BiLSTM with attention, (2) Siamese Network on Text Embeddings, and (3) Siamese Network on Visual Embeddings. An overview of the architecture can be seen in Figure 1.
We start with an explanation of the features used in the first component of the model.
Distributed Word Embeddings
Considering the effectiveness of distributional semantics in modeling language data, we use a pre-trained 300 dimensional Word2Vec BIBREF9 model trained over 100 billion words in the Google News corpus using the Continuous Bag of Words architecture. These map the words in a language to a high dimensional real-valued vectors to capture hidden semantic and syntactic properties of words, and are typically learned from large, unannotated text corpora. For each word in the title, we obtain its equivalent Word2Vec embeddings using the model described above.
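A possible way to look up these embeddings with gensim is sketched below; the file name refers to the standard Google News release and is assumed to be available locally, and the random-initialisation range for out-of-vocabulary words is an arbitrary choice, since the text only says OoV words are initialised randomly.

```python
from gensim.models import KeyedVectors
import numpy as np

# Load the pre-trained 300-d Google News Word2Vec vectors (binary format).
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def title_to_vectors(title, dim=300):
    """Map each token of the title to its Word2Vec vector; OoV words get a
    random vector (the range here is an assumption)."""
    return np.stack([w2v[w] if w in w2v else np.random.uniform(-0.25, 0.25, dim)
                     for w in title.lower().split()])

vecs = title_to_vectors("you won't believe what happened next")  # (n_words, 300)
```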
Character Level Word Embeddings
Character level word embeddings BIBREF10 capture the orthographic and morphological features of a word. Apart from this, using them is a step toward mitigating the problem of out-of-vocabulary (OoV) words. In such a case, the word can be embedded by its characters using character level embedding. We follow BIBREF5 and first initialize a vector for every character in the corpus. The vector representation of each word is learned by applying 3 layers of a 1-dimensional Convolutional Neural Network BIBREF11 with ReLU non-linearity on each vector of character sequence of that word and finally max-pooling the sequence for each convolutional feature.
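The character-level pathway can be sketched as follows. The three stacked 1-D convolutions with ReLU and the final max-pool over the character sequence follow the description above, while the character-embedding size, number of filters, and kernel width are illustrative guesses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharCNNEmbedding(nn.Module):
    """Character-level word embedding: 3 stacked 1-D convolutions with ReLU,
    followed by max-pooling over the character sequence."""
    def __init__(self, n_chars, char_dim=16, out_dim=50, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.convs = nn.ModuleList([
            nn.Conv1d(char_dim, out_dim, kernel, padding=1),
            nn.Conv1d(out_dim, out_dim, kernel, padding=1),
            nn.Conv1d(out_dim, out_dim, kernel, padding=1),
        ])

    def forward(self, char_ids):
        # char_ids: (batch_words, max_chars) integer character indices
        x = self.char_emb(char_ids).transpose(1, 2)   # (batch_words, char_dim, max_chars)
        for conv in self.convs:
            x = F.relu(conv(x))
        return x.max(dim=2).values                    # (batch_words, out_dim)

emb = CharCNNEmbedding(n_chars=70)
word_vecs = emb(torch.randint(0, 70, (8, 12)))        # 8 words of 12 characters each
```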
Document Embeddings
Doc2Vec BIBREF12 is an unsupervised approach to generate vector representations for slightly larger bodies of text, such as sentences, paragraphs and documents. It has been adapted from Word2Vec BIBREF9 which is used to generate vectors for words in large unlabeled corpora. The vectors generated by this approach come handy in tasks like calculating similarity metrics for sentences, paragraphs and documents. In sequential models like RNNs, the word sequence is captured in the generated sentence vectors. However, in Doc2Vec, the representations are order independent. We use GenSim BIBREF13 to learn 300 dimensional Doc2Vec embeddings for each target description and post title available.
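A gensim sketch of this step is shown below; the toy corpus and the window/epoch settings are placeholders, and `vector_size=300` matches the dimensionality used here (`dm=0` would switch the architecture to PV-DBOW).

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Placeholder corpus: in practice, one tagged document per title / description.
corpus = [
    TaggedDocument(words="the post title of a news article".split(), tags=[0]),
    TaggedDocument(words="the target description of the linked page".split(), tags=[1]),
]
model = Doc2Vec(corpus, vector_size=300, window=5, min_count=1, epochs=20)

# Infer order-independent vectors for unseen text at prediction time.
title_vec = model.infer_vector("you won't believe what happened next".split())
desc_vec = model.infer_vector("a longer description of the linked page".split())
```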
Pre-trained CNN Features
As seen in various visual understanding problems recently, image descriptors trained using Convolutional Neural Networks over large amounts of data such as ImageNet have proven to be very effective. The implicit learning of spatial layout and object semantics in the later layers of the network from very large datasets has contributed to the success of these features. We use a pre-trained network of VGG-19 architecture BIBREF14 trained over the ImageNet database (ILSVRC-2012) and extract CNN features. We use the output of the fully-connected layer (FC7), which has 4096 dimensions, as feature representations for our architecture.
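One way to obtain the FC7 activations with torchvision is sketched below (using the `weights` argument introduced in torchvision 0.13; older versions use `pretrained=True`). The image path is a placeholder.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained VGG-19 up to the FC7 layer (4096-d output).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
fc7 = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten(),
                          *list(vgg.classifier.children())[:5])  # up to FC7 + ReLU

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

img = preprocess(Image.open("post_image.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feats = fc7(img)   # (1, 4096) feature representation
```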
We now go into detail about the components of the model, individual and combined, and how the parameters are learned.
Bidirectional LSTM with Attention
Recurrent Neural Network (RNN) is a class of artificial neural networks which utilizes sequential information and maintains history through its intermediate layers. A standard RNN has an internal state whose output at every time-step can be expressed in terms of that of previous time-steps. However, it has been seen that standard RNNs suffer from the problem of vanishing gradients BIBREF15 . This means they are not able to efficiently model dependencies and interactions between words that are a few steps apart. LSTMs are able to tackle this issue through their use of gating mechanisms. For each record in the dataset, the content of the post as well as the content of the related web page is available. We convert the words from the titles of both attributes into the previously mentioned types of embeddings to act as input to our bidirectional LSTMs.
$(\overrightarrow{h}_1, \overrightarrow{h}_2, \dots , \overrightarrow{h}_R)$ represent forward states of the LSTM and its state updates satisfy the following equations:
$$\big [\overrightarrow{f_t},\overrightarrow{i_t},\overrightarrow{o_t}\big ] = \sigma \big [ \overrightarrow{W} \big [\overrightarrow{h}_{t-1},\overrightarrow{r_t}\big ] + \overrightarrow{b}\big ]$$ (Eq. 3)
$$\overrightarrow{l_t} = \tanh \big [\overrightarrow{V} \big [\overrightarrow{h}_{t-1}, \overrightarrow{r_t}\big ] + \overrightarrow{d}\big ]$$ (Eq. 4)
Here $\sigma $ is the logistic sigmoid function, and $\overrightarrow{f_t}$ , $\overrightarrow{i_t}$ , $\overrightarrow{o_t}$ represent the forget, input and output gates respectively. $\overrightarrow{r_t}$ denotes the input at time $t$ and $\overrightarrow{h_t}$ denotes the latent state, while $\overrightarrow{b}$ and $\overrightarrow{d}$ represent the bias terms. The forget, input and output gates control the flow of information throughout the sequence. $\overrightarrow{W}$ and $\overrightarrow{V}$ are matrices which represent the weights associated with the connections.
$(\overleftarrow{h}_1, \overleftarrow{h}_2, \dots , \overleftarrow{h}_R)$ denote the backward states and its updates can be computed similarly.
The number of bidirectional LSTM units is set to a constant K, which is the maximum of all title lengths of records used in training. The forward and backward states are then concatenated to obtain $(h_1, h_2, \dots , h_K)$ , where
$$h_i = \begin{bmatrix} \overrightarrow{h}_i \\ \overleftarrow{h}_i \end{bmatrix}$$ (Eq. 7)
Finally, we are left with the task of figuring out the significance of each word in the sequence, i.e., how much a particular word influences the clickbait-y nature of the post. The effectiveness of attention mechanisms has been proven for the task of neural machine translation BIBREF7 , and it has the same effect in this case. The goal of attention mechanisms in such tasks is to derive context vectors which capture relevant source-side information and help predict the current target word. The sequence of annotations generated by the encoder is used to come up with a context vector capturing how each word contributes to the record's clickbait quotient, which is of paramount importance to this model. In a typical RNN encoder-decoder framework BIBREF7 , a context vector is generated at each time-step to predict the target word. However, we only need to calculate a context vector for a single time-step.
$$c_{attention} = \sum _{j=1}^{K}\alpha _jh_j$$ (Eq. 8)
where $h_1, \ldots, h_K$ represent the sequence of annotations to which the encoder maps the post title vector, and each $\alpha _j$ represents the respective weight corresponding to annotation $h_j$ . This component is shown on the left of Figure 1.
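A compact PyTorch sketch of this component is given below. How the attention weights $\alpha _j$ are scored is not fully specified above, so a learned linear scoring function is assumed; the hidden size is also illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBiLSTM(nn.Module):
    """BiLSTM over the title's word vectors with a single-query attention that
    produces one context vector c_attention (Eq. 8)."""
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1, bias=False)   # assumed scoring function

    def forward(self, x):
        # x: (batch, K, in_dim) word (and char-level) embeddings of the title
        h, _ = self.lstm(x)                                   # (batch, K, 2*hidden)
        alpha = F.softmax(self.attn(h).squeeze(-1), dim=-1)   # weights per annotation
        return torch.bmm(alpha.unsqueeze(1), h).squeeze(1)    # context vector

enc = AttentiveBiLSTM(in_dim=300, hidden=128)
c_attention = enc(torch.randn(4, 20, 300))   # (4, 256)
```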
Siamese Net with Text Embeddings
The second component of our model is a Siamese net BIBREF16 over two textual features in the dataset. Siamese networks are designed around symmetry, which is important because it is required for learning a distance metric. We use them to find the similarity between the title of the record and its target description. The words in the title and in the target description are converted into their respective Doc2Vec embeddings and concatenated, after which they are fed as input into a Siamese network. A visual representation of this can be found in the middle of Figure 1.
Siamese Neural Network with Visual Embeddings
The final component of our hybrid model is also a Siamese net. However, it considers visual information available in the dataset, and sets our model apart from other approaches in this field. The relevance of the image attached to the post can be quantified by capturing its similarity with the target description. The VGG-19 architecture outputs a 4096 dimensional vector for each image which, in turn, is fed as input into a dense layer to convert each representation to a 300 dimensional vector. This serves as one input to the visual Siamese net. The target description is converted into its 300 dimensional vector representation by passing it through the pre-trained Doc2Vec model, which acts as the second input for the network. It is the rightmost part of Figure 1.
Fusion of the components
To combine the components and complete our hybrid model, the output from each of the three parts is concatenated and subsequently acts as input for a fully connected layer. This layer finally gives as its output the probability/extent that a post, together with its related information, can be considered clickbait.
Learning the Parameters
We use binary cross-entropy as the loss optimization function for our model. The cross-entropy method BIBREF17 is an iterative procedure where each iteration can be divided into two stages:
(1) Generate a random data sample (vectors, trajectories etc.) according to a specified mechanism.
(2) Update the parameters of the random mechanism based on the data to produce a "better" sample in the next iteration.
Evaluation Results
The model was evaluated over a collection of 19538 social media posts BIBREF3 , each containing supplementary information like target description, target keywords and linked images. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered.
Training
We randomly partition the training set into training and validation set in a 4:1 ratio. This ensures that the two sets do not overlap. The model hyperparameters are tuned over the validation set. We initialise the fully connected network weights with the uniform distribution in the range $-\sqrt{{6}/{(fanin + fanout)}}$ and $\sqrt{{6}/{(fanin + fanout)}}$ BIBREF18 . We used a batch size of 256 and adadelta BIBREF19 as a gradient based optimizer for learning the parameters of the model.
Comparison with other models
In Table 1, we compare our model with the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. Calculation and comparison across these metrics was conducted on TIRA BIBREF2 , a platform that offers evaluation as a service. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection.
Conclusion
In this work, we have come up with a multi-strategy approach to tackle the problem of clickbait detection across the Internet. Our model takes into account both textual and image features, a multimedia approach, to score and classify headlines. A neural attention mechanism is utilised over BIBREF5 to improve its performance, while Siamese nets are added for scoring the similarity between different attributes of the post. To build on this approach, we would like to explore better image embedding techniques to relate the image more closely to the article. | This approach considers related images
6f2f304ef292d8bcd521936f93afeec917cbe28a | 6f2f304ef292d8bcd521936f93afeec917cbe28a_0 | Q: How much improvement is gained from the proposed approaches?
Text: Introduction
Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.
We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.
Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.
Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.
To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.
The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning.
Background
We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.
Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence.
Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.
Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context.
Background ::: Recurrent Language Models
A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.
Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step: $p_{\theta }(y_t = v\,|\,y_{<t}, C) = \frac{\exp (u_v^{\top } h_t + c_v)}{\sum _{v^{\prime } \in V} \exp (u_{v^{\prime }}^{\top } h_t + c_{v^{\prime }})},$
where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by $p_{\theta }(Y\,|\,C) = \prod _{t=1}^{T} p_{\theta }(y_t\,|\,y_{<t}, C),$
where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies
Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.
Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length.
Background ::: Decoding Algorithms
Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.
Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$.
We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens.
Background ::: Decoding Algorithms ::: Stochastic decoding.
The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.
Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.
Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution:
Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define
where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with
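Both incomplete sampling schemes can be implemented by masking logits before sampling, as sketched below for a single next-token distribution. The condition defining $k_\mu $ is truncated above; the sketch follows the usual reading of keeping the smallest prefix of most probable tokens whose cumulative probability reaches the threshold $\mu $.

```python
import torch

def top_k_filter(logits, k):
    """Keep the k most probable tokens of a 1-D logit vector; mask the rest."""
    kth = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth, float("-inf"))

def nucleus_filter(logits, mu):
    """Keep the smallest prefix of most probable tokens whose cumulative
    probability reaches mu (1-D logits only, for clarity)."""
    probs, idx = torch.sort(torch.softmax(logits, dim=-1), descending=True)
    cum = torch.cumsum(probs, dim=-1)
    remove = cum - probs >= mu        # earlier tokens already cover the nucleus
    filtered = logits.clone()
    filtered[idx[remove]] = float("-inf")
    return filtered

logits = torch.tensor([2.0, 1.0, 0.5, -3.0])       # toy next-token logits
next_top_k = torch.multinomial(torch.softmax(top_k_filter(logits, 2), -1), 1)
next_nucleus = torch.multinomial(torch.softmax(nucleus_filter(logits, 0.9), -1), 1)
```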
Background ::: Decoding Algorithms ::: Deterministic decoding.
The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.
Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is
where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$.
Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.
Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $P_0^{top}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements,
Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$.
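A compact, runnable sketch of Definition 2.10 follows. The toy `step_fn` stands in for $\log p_{\theta }(\cdot \,|\,\text{prefix}, C)$, and the `max_len` cap is a practical addition that the definition itself does not have, which is precisely the non-termination issue analysed next.

```python
import torch

V = 5          # toy vocabulary size; token 0 plays the role of <eos>

def step_fn(prefix):
    """Stand-in for log p_theta(. | prefix, C): deterministic toy logits."""
    torch.manual_seed(len(prefix))
    return torch.log_softmax(torch.randn(V), dim=-1)

def beam_search(step_fn, bos, eos, k, max_len=50):
    beams = [([bos], 0.0)]                     # size-k prefix set with scores
    finished = []                              # the set S of completed sequences
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            logp = step_fn(prefix)
            for v in range(V):                 # expand each prefix with every token
                candidates.append((prefix + [v], score + logp[v].item()))
        candidates.sort(key=lambda c: c[1], reverse=True)
        top = candidates[:k]                   # k highest scoring expanded prefixes
        finished += [(p, s) for p, s in top if p[-1] == eos]
        beams = [(p, s) for p, s in top if p[-1] != eos]
        if len(finished) >= k or not beams:
            break
    pool = finished if finished else beams
    return max(pool, key=lambda c: c[1])[0]    # highest scoring sequence found

print(beam_search(step_fn, bos=1, eos=0, k=3))
```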
Background ::: Decoding Algorithms ::: Incompleteness.
Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.
Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that
Consistency of a Decoding Algorithm ::: Definition of consistency.
A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.
Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.
Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.
Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$.
Next, we establish a practical condition under which a recurrent language model is consistent.
Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$.
[Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $. Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent.
Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.
Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$.
When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.
Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$.
Consistency of a Decoding Algorithm ::: Inconsistency of incomplete decoding.
Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.
Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution.
We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as
where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23.
For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be
where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let
where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative.
This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function.
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence.
The log-probability of this infinitely long sequence $\hat{Y}$ is
For any $v\in V$,
where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms.
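The construction in the proof can be simulated directly. The sketch below is not the exact network from the theorem (the displayed formulas for $f_{\theta }$ and $u_v$ are not reproduced in this text); it only enforces the stated properties: positive recurrent weights, a positive initial state, positive readout weights for ordinary tokens, and negative readout weights for $\left<\text{eos}\right>$. The vocabulary, sizes, and random values are illustrative assumptions.

```python
import math, random

random.seed(0)
V = ["a", "b", "c", "<eos>"]   # illustrative vocabulary
d = len(V)

# Positive recurrent weights and a positive initial state h_0 = g(C).
W_h = [[random.uniform(0.1, 1.0) for _ in range(d)] for _ in range(d)]
h = [random.uniform(0.1, 1.0) for _ in range(d)]

# Readout vectors u_v: positive for ordinary tokens, negative for <eos>.
U = {v: [random.uniform(0.1, 1.0) for _ in range(d)] for v in V[:-1]}
U["<eos>"] = [-random.uniform(0.1, 1.0) for _ in range(d)]

def step(h, y_index):
    """One tanh update: every coordinate of h_t stays inside (0, 1)."""
    return [math.tanh(sum(W_h[i][j] * h[j] for j in range(d))
                      + (1.0 if i == y_index else 0.0))
            for i in range(d)]

def logits(h):
    return {v: sum(U[v][i] * h[i] for i in range(d)) for v in V}

# Greedy decoding, an incomplete algorithm that drops the lowest-ranked token.
for _ in range(20):
    scores = logits(h)
    assert min(scores, key=scores.get) == "<eos>"  # <eos> is always ranked last
    y = max(scores, key=scores.get)                # so it is never selected
    h = step(h, V.index(y))
print("decoded 20 steps without ever selecting <eos>")
```

Because the hidden state stays positive and bounded, the $\left<\text{eos}\right>$ logit is negative while every other logit is positive, so any decoder that drops even one token per step never terminates in this toy setup.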
Fixing the inconsistency
In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper.
Fixing the inconsistency ::: Consistent Sampling Algorithms
The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent.
Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$.
Let $P^{\prime }_{t-1}$ denote the set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$,
Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have
from which it follows that the decoding algorithm is consistent.
We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.
Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:
where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$.
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:
The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model.
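The modified proposal distributions are not written out above; the sketch below shows one natural reading of both definitions: force $\left<\text{eos}\right>$ into the candidate set, restrict the model's distribution to that set, and renormalize. It operates on a plain token-to-probability dictionary, and the renormalization step is an assumption about how the proposal is normalized.

```python
EOS = "<eos>"

def _restrict(p, allowed):
    total = sum(p[v] for v in allowed)
    return {v: (p[v] / total if v in allowed else 0.0) for v in p}

def consistent_top_k(p, k):
    """Top-k proposal with V' = {<eos>} union the k most probable tokens."""
    ranked = sorted(p, key=p.get, reverse=True)
    return _restrict(p, set(ranked[:k]) | {EOS})

def consistent_nucleus(p, mu):
    """Nucleus proposal with <eos> always added to the candidate set."""
    ranked = sorted(p, key=p.get, reverse=True)
    allowed, cumulative = set(), 0.0
    for v in ranked:
        allowed.add(v)
        cumulative += p[v]
        if cumulative > mu:
            break
    return _restrict(p, allowed | {EOS})

# Even if the model ranks <eos> last, its induced probability never drops
# below the model's probability, which is the sufficient condition above.
p = {"a": 0.5, "b": 0.3, "c": 0.19, EOS: 0.01}
assert consistent_top_k(p, k=2)[EOS] >= p[EOS]
assert consistent_nucleus(p, mu=0.6)[EOS] >= p[EOS]
```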
Fixing the inconsistency ::: A Self-Terminating Recurrent Language Model
Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).
Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:
where
with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model.
The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding.
Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.
Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have
Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof.
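The display equations of Definition UNKREF33 are not reproduced in this text, so the sketch below should be read as one plausible instantiation consistent with the proof above: the probability of not yet emitting $\left<\text{eos}\right>$ is a running product of values $\sigma (a_{t}^{\left<\text{eos}\right>}) \le 1-\epsilon $, which makes $p_{t}^{\left<\text{eos}\right>}$ monotonically increasing and at least $1-(1-\epsilon )^t$. The product form, and the choice $\sigma (x)=(1-\epsilon )\,\text{sigmoid}(x)$ used in the experiments later in the paper, are assumptions here.

```python
import math

def sigma(x, eps):
    """sigma: R -> [0, 1 - eps]; here (1 - eps) * sigmoid(x)."""
    return (1.0 - eps) / (1.0 + math.exp(-x))

def eos_probabilities(eos_logits, eps):
    """Assumed self-terminating form: p_t = 1 - prod_{t' <= t} sigma(a_t')."""
    survival, probs = 1.0, []
    for a in eos_logits:
        survival *= sigma(a, eps)   # shrinks by at least a factor (1 - eps)
        probs.append(1.0 - survival)
    return probs

# Even with logits that strongly favour continuation, termination wins out.
probs = eos_probabilities([5.0] * 2000, eps=1e-3)
assert all(b >= a for a, b in zip(probs, probs[1:]))   # monotone increase

for eps in (1e-2, 1e-3, 1e-4):
    B = -math.log(2) / math.log(1 - eps)
    print(f"eps={eps:g}: <eos> must be the most probable token after ~{B:.0f} steps")
```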
Empirical Validation
The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches.
Empirical Validation ::: Sequence completion.
We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$.
Empirical Validation ::: Dataset.
We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137.
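A minimal preprocessing sketch matching this description: sentence-split each paragraph with spaCy, left-pad short sequences so that a length-$k$ prefix exists, add the special markers, and split into context and continuation. The spaCy model name, the whitespace tokenizer, and the exact ordering of padding and special tokens are assumptions, not details taken from the paper.

```python
import spacy

K = 10                               # prefix length used in the experiments
PAD, BOS, EOS = "<pad>", "<bos>", "<eos>"

nlp = spacy.load("en_core_web_sm")   # assumed English pipeline (must be installed)

def paragraph_to_examples(paragraph):
    """Turn one Wikitext-2 paragraph into (context, continuation) pairs."""
    examples = []
    for sent in nlp(paragraph).sents:
        tokens = sent.text.split()                       # word-level tokens
        if len(tokens) < K:                              # ensure a full prefix
            tokens = [PAD] * (K - len(tokens)) + tokens
        tokens = [BOS] + tokens + [EOS]                  # special markers
        examples.append((tokens[:K], tokens[K:]))
    return examples
```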
Empirical Validation ::: Context distribution.
We define empirical context distributions with prefixes from the train, valid, and test sets,
where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split.
Empirical Validation ::: Evaluation metrics.
We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,
where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.
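The metric is straightforward to compute; a small sketch, assuming each decoded continuation is a list of tokens and that decoding was cut off after $L$ steps:

```python
L = 1500   # decoding limit, more than 10x the maximum training length

def non_termination_ratio(continuations, limit=L, eos="<eos>"):
    """r_L: fraction of decoded continuations with no <eos> within `limit` steps."""
    failed = sum(1 for y in continuations if eos not in y[:limit])
    return failed / len(continuations)

# example: two of three continuations terminate, one runs to the limit
decoded = [["a", "b", "<eos>"], ["c", "<eos>"], ["a"] * L]
print(non_termination_ratio(decoded))   # 0.333...
```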
In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results.
Empirical Validation ::: Training.
We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$:
This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs.
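The loss itself is not displayed in this text; the sketch below implements the described behaviour on per-step probabilities: run the whole sequence through the model, but only the continuation positions contribute to the negative log-likelihood. The interface (a list of per-step probabilities) is an assumption for illustration.

```python
import math

def completion_loss(stepwise_probs, k):
    """Negative log-likelihood with the loss zeroed on the first k positions.

    stepwise_probs[t] is the model's probability of the t-th observed token
    given all preceding tokens, so the context is still encoded by the
    recurrent state even though it is excluded from the loss."""
    return -sum(math.log(p) for p in stepwise_probs[k:])

# toy example: a length-3 context followed by a 3-token continuation
probs = [0.2, 0.3, 0.25, 0.9, 0.8, 0.95]
print(completion_loss(probs, k=3))   # only the last three terms contribute
```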
Empirical Validation ::: Models.
We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.
Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search.
Empirical Validation ::: Inconsistency of Recurrent Language Models
In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).
Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.
In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination.
Empirical Validation ::: Consistency of the Proposed Methods
In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality.
Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.
Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.
Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.
As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.
For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.
Future Directions
The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.
One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:
where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight.
Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding.
Conclusion
We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.
Acknowledgements
We thank Chris Dyer, Noah Smith and Kevin Knight for valuable discussions. This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC thanks eBay and NVIDIA for their support.	It eliminates non-termination in some models, fixing non-termination ratios of up to 6% for some models.
82fa2b99daa981fc42a882bb6db8481bdbbb9675	82fa2b99daa981fc42a882bb6db8481bdbbb9675_0	Q: Is the problem of determining whether a given model would generate an infinite sequence a decidable problem?
Text: Introduction
Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.
We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.
Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.
Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.
To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.
The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning.
Background
We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.
Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence.
Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.
Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context.
Background ::: Recurrent Language Models
A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.
Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step
where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by
where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies
Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.
Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length.
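Definition 2.3 and Remark 2.1 can be made concrete with a few lines of code. The sketch below uses a plain $\tanh $ transition and randomly initialized parameters purely for illustration; the vocabulary and sizes are assumptions.

```python
import math, random

random.seed(0)
V = ["a", "b", "c", "<eos>"]    # illustrative vocabulary
d = 8                           # hidden size (arbitrary)

U = {v: [random.gauss(0, 0.5) for _ in range(d)] for v in V}   # u_v
c = {v: 0.0 for v in V}                                        # c_v
W = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(d)]
E = {v: [random.gauss(0, 0.5) for _ in range(d)] for v in V}   # token embeddings

def f_theta(y, h):
    """h_t = f_theta(y_t, h_{t-1}); here a plain tanh RNN update."""
    return [math.tanh(sum(W[i][j] * h[j] for j in range(d)) + E[y][i])
            for i in range(d)]

def conditional(h):
    """p_theta(v | y_<t, C) = softmax over u_v^T h_t + c_v; the softmax
    gives every token a strictly positive probability (Remark 2.1)."""
    scores = {v: sum(U[v][i] * h[i] for i in range(d)) + c[v] for v in V}
    m = max(scores.values())
    exp_scores = {v: math.exp(s - m) for v, s in scores.items()}
    Z = sum(exp_scores.values())
    return {v: e / Z for v, e in exp_scores.items()}
```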
Background ::: Decoding Algorithms
Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.
Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$.
We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens.
Background ::: Decoding Algorithms ::: Stochastic decoding.
The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.
Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
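A direct implementation of ancestral sampling; the callback `next_token_probs`, assumed to wrap $p_{\theta }(\cdot \,|\,y_{<t}, C)$ for a fixed context, and the optional step cap are illustrative assumptions.

```python
import random

EOS = "<eos>"

def ancestral_sample(next_token_probs, max_steps=None):
    """Sample y_t ~ p_theta(. | y_<t, C) until <eos> is drawn."""
    prefix = []
    while max_steps is None or len(prefix) < max_steps:
        p = next_token_probs(tuple(prefix))
        tokens, weights = zip(*p.items())
        y = random.choices(tokens, weights=weights, k=1)[0]
        prefix.append(y)
        if y == EOS:
            break
    return prefix
```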
In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.
Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution:
Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define
where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with
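The proposal distributions and the cutoff $k_\mu $ are not written out above; the sketch below implements the usual reading of both definitions: keep the $k$ most probable tokens (top-$k$), or the smallest prefix of the probability-sorted vocabulary whose total probability exceeds $\mu $ (nucleus), zero out everything else, and renormalize. The strict inequality and the renormalization are assumptions based on the surrounding prose.

```python
def top_k_proposal(p, k):
    """q(v) proportional to p(v) on the k most probable tokens, 0 elsewhere."""
    kept = sorted(p, key=p.get, reverse=True)[:k]
    total = sum(p[v] for v in kept)
    return {v: (p[v] / total if v in kept else 0.0) for v in p}

def nucleus_proposal(p, mu):
    """q(v) proportional to p(v) on the smallest set V_mu with total mass > mu."""
    ranked = sorted(p, key=p.get, reverse=True)
    nucleus, cumulative = [], 0.0
    for v in ranked:
        nucleus.append(v)
        cumulative += p[v]
        if cumulative > mu:
            break
    total = sum(p[v] for v in nucleus)
    return {v: (p[v] / total if v in nucleus else 0.0) for v in p}

# Both methods can assign zero probability to <eos> when the model ranks it low,
# which is exactly the incompleteness discussed in the rest of the paper.
p = {"a": 0.5, "b": 0.3, "c": 0.19, "<eos>": 0.01}
print(top_k_proposal(p, k=2)["<eos>"], nucleus_proposal(p, mu=0.9)["<eos>"])  # 0.0 0.0
```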
Background ::: Decoding Algorithms ::: Deterministic decoding.
The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
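Greedy decoding is the deterministic counterpart of the sampler sketched earlier: take the most probable token at every step. As before, the `next_token_probs` callback and the step cap are assumptions for illustration.

```python
EOS = "<eos>"

def greedy_decode(next_token_probs, max_steps=1000):
    """Pick argmax_v p_theta(v | y_<t, C) until <eos> is selected."""
    prefix = []
    for _ in range(max_steps):
        p = next_token_probs(tuple(prefix))
        y = max(p, key=p.get)
        prefix.append(y)
        if y == EOS:
            break
    return prefix
```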
In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.
Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is
where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$.
Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.
Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $\mathrm {P}_0^{\text{top}}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements,
Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$.
Background ::: Decoding Algorithms ::: Incompleteness.
Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.
Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that
Consistency of a Decoding Algorithm ::: Definition of consistency.
A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.
Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.
Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.
Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$.
Next, we establish a practical condition under which a recurrent language model is consistent.
Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$.
[Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $. Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent.
Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.
Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$.
When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.
Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$.
Consistency of a Decoding Algorithm ::: Inconsistency of incomplete decoding.
Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.
Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution.
We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as
where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23.
For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be
where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let
where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative.
This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function.
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence.
The log-probability of this infinitely long sequence $\hat{Y}$ is
For any $v\in V$,
where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms.
Fixing the inconsistency
In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper.
Fixing the inconsistency ::: Consistent Sampling Algorithms
The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent.
Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$.
Let $P^{\prime }_{t-1}$ denote the set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$,
Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have
from which it follows that the decoding algorithm is consistent.
We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.
Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:
where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$.
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:
The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model.
Fixing the inconsistency ::: A Self-Terminating Recurrent Language Model
Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).
Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:
where
with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model.
The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding.
Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.
Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have
Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof.
Empirical Validation
The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches.
Empirical Validation ::: Sequence completion.
We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$.
Empirical Validation ::: Dataset.
We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137.
Empirical Validation ::: Context distribution.
We define empirical context distributions with prefixes from the train, valid, and test sets,
where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split.
Empirical Validation ::: Evaluation metrics.
We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,
where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.
In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results.
Empirical Validation ::: Training.
We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$:
This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs.
Empirical Validation ::: Models.
We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.
Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search.
Empirical Validation ::: Inconsistency of Recurrent Language Models
In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).
Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.
In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination.
Empirical Validation ::: Consistency of the Proposed Methods
In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality.
Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.
Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.
Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.
As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.
For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.
Future Directions
The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.
One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:
where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight.
Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding.
Conclusion
We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.
Acknowledgements
We thank Chris Dyer, Noah Smith and Kevin Knight for valuable discussions. This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC thanks eBay and NVIDIA for their support. | Unanswerable |
61fb982b2c67541725d6db76b9c710dd169b533d | 61fb982b2c67541725d6db76b9c710dd169b533d_0 | Q: Is infinite-length sequence generation a result of training with maximum likelihood?
Text: Introduction
Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.
We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.
Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.
Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.
To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.
The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning.
Background
We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.
Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence.
Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.
Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context.
Background ::: Recurrent Language Models
A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.
Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step
where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by
where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies
Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.
Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length.
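To make the definition concrete, here is a minimal NumPy sketch of one conditional step of a recurrent language model with a tanh transition and a softmax output layer; the class name, weight shapes, and random initialization are illustrative choices of this sketch rather than the parameterization used in the paper's experiments.

```python
import numpy as np

def softmax(x):
    x = x - x.max()                     # numerical stability
    e = np.exp(x)
    return e / e.sum()

class TanhRecurrentLM:
    """Minimal recurrent LM: p(y_t = v | y_<t, C) = softmax(U h_t + c)_v."""
    def __init__(self, vocab_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.E = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # token embeddings
        self.W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # recurrent weights
        self.U = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # output weights u_v
        self.c = np.zeros(vocab_size)                                    # output biases c_v

    def init_state(self, context_vec):
        # h_0 = g_theta(C): here simply a bounded projection of a context vector
        return np.tanh(self.W @ context_vec)

    def step(self, y_prev, h_prev):
        # h_t = f_theta(y_t, h_{t-1}); tanh keeps h_t bounded (cf. Lemma 3.2)
        h_t = np.tanh(self.E[y_prev] + self.W @ h_prev)
        p_t = softmax(self.U @ h_t + self.c)   # every token gets positive probability (Remark 2.1)
        return p_t, h_t
```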
Background ::: Decoding Algorithms
Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.
Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$.
We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens.
Background ::: Decoding Algorithms ::: Stochastic decoding.
The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.
Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.
Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution:
Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define
where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with
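The truncated proposal distributions can be sketched directly from these definitions. Below is an illustrative NumPy version of the top-$k$ and nucleus proposals, plus a sampler that draws tokens until $\left<\text{eos}\right>$; the `model.step` interface and the use of $\left<\text{eos}\right>$ as the initial input token are assumptions of this sketch.

```python
import numpy as np

def top_k_proposal(p, k):
    """Renormalize p over its k most probable tokens; all other tokens get zero mass."""
    keep = np.argsort(p)[::-1][:k]
    q = np.zeros_like(p)
    q[keep] = p[keep]
    return q / q.sum()

def nucleus_proposal(p, mu):
    """Renormalize p over the smallest set of most probable tokens with total mass >= mu."""
    order = np.argsort(p)[::-1]
    k_mu = int(np.searchsorted(np.cumsum(p[order]), mu)) + 1
    q = np.zeros_like(p)
    q[order[:k_mu]] = p[order[:k_mu]]
    return q / q.sum()

def sample_until_eos(model, h0, proposal, eos_id, max_len=1000, seed=0):
    """Recursively sample y_t ~ q(. | y_<t, C) until <eos> or a step budget is hit."""
    rng = np.random.default_rng(seed)
    y, h, prev = [], h0, eos_id
    for _ in range(max_len):
        p, h = model.step(prev, h)          # any object exposing step(token, state)
        q = proposal(p)
        prev = int(rng.choice(len(q), p=q))
        y.append(prev)
        if prev == eos_id:
            break
    return y
```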
Background ::: Decoding Algorithms ::: Deterministic decoding.
The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.
Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is
where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$.
Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.
Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $P_0^{top}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements,
Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$.
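A compact beam-search sketch following this definition is given below; `step_fn(token, state) -> (log_probs, new_state)` is an assumed model interface, and starting decoding from the $\left<\text{eos}\right>$/$\left<\text{bos}\right>$ token is an illustrative choice.

```python
import numpy as np

def beam_search(step_fn, h0, k, eos_id, max_len=1000):
    """Width-k beam search; prefixes ending in <eos> are moved to the finished set S."""
    beams = [([], 0.0, h0)]              # (prefix, score = sum of log-probs, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score, state in beams:
            prev = prefix[-1] if prefix else eos_id
            logp, new_state = step_fn(prev, state)
            for v in np.argsort(logp)[::-1][:k]:       # top-k children per beam suffice
                candidates.append((prefix + [int(v)], score + float(logp[v]), new_state))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for cand in candidates[:k]:                    # keep the k highest-scoring prefixes
            (finished if cand[0][-1] == eos_id else beams).append(cand)
        if len(finished) >= k or not beams:            # stop once S holds k sequences
            break
    pool = finished if finished else beams
    return max(pool, key=lambda c: c[1])[0]            # highest-scoring sequence
```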
Background ::: Decoding Algorithms ::: Incompleteness.
Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.
Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that
Consistency of a Decoding Algorithm ::: Definition of consistency.
A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.
Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.
Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.
Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$.
Next, we establish a practical condition under which a recurrent language model is consistent.
Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$.
[Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $. Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent.
Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.
Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$.
When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.
Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$.
Consistency of a Decoding Algorithm ::: Inconsistency of incomplete decoding.
Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.
Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution.
We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as
where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23.
For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be
where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let
where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative.
This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function.
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence.
The log-probability of this infinitely long sequence $\hat{Y}$ is
For any $v\in V$,
where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms.
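The construction in this proof can be checked numerically. The sketch below instantiates a small model in the spirit of the proof (all-positive recurrent weights and $\bar{u}_v$, all-negative $u_{\left<\text{eos}\right>}$; the exact recurrence is elided above, so the tanh update used here is an assumption) and verifies that $\left<\text{eos}\right>$ is always the lowest-ranked token, so any decoder that drops even the single least-likely token never terminates.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V, eos = 5, 0                       # tiny vocabulary; token 0 plays the role of <eos>
d = V                               # hidden size |V| so that u_v = u_bar_v + e(v) is well-typed

W_h = rng.uniform(0.1, 0.5, (d, d))             # all-positive recurrent weights
h = rng.uniform(0.1, 0.5, d)                    # h_0 = g_theta(C), all positive
U = rng.uniform(0.1, 0.5, (V, d)) + np.eye(V)   # u_v = u_bar_v + e(v), all positive entries
U[eos] = -rng.uniform(0.1, 0.5, d)              # u_eos: all-negative entries
c = np.zeros(V)

y, logprob = 1, 0.0
for t in range(50):
    h = np.tanh(W_h @ h + np.eye(V)[y])         # assumed instantiation of the elided recurrence
    p = softmax(U @ h + c)
    assert np.argmin(p) == eos                  # <eos> is always ranked last ...
    y = int(np.argmax(p))                       # ... so greedy decoding can never emit it
    logprob += np.log(p[y])

print(f"log-probability after 50 greedy steps: {logprob:.2f}  (diverges to -inf with length)")
```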
Fixing the inconsistency
In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper.
Fixing the inconsistency ::: Consistent Sampling Algorithms
The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent.
Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$.
Let $P^{\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$,
Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have
from which the decoding algorithm is consistent.
We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.
Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:
where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$.
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:
The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model.
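Since the modified proposal distributions themselves are elided above, the sketch below shows one natural reading of them in NumPy: renormalize over the kept set with $\left<\text{eos}\right>$ forced in, so that the induced $\left<\text{eos}\right>$ probability is at least the model's and Theorem 4.1 applies.

```python
import numpy as np

def consistent_top_k_proposal(p, k, eos_id):
    """Top-k proposal over V' = {<eos>} plus the top-k tokens; q(<eos>) >= p(<eos>)."""
    keep = list(set(np.argsort(p)[::-1][:k].tolist()) | {eos_id})
    q = np.zeros_like(p)
    q[keep] = p[keep]
    return q / q.sum()

def consistent_nucleus_proposal(p, mu, eos_id):
    """Nucleus proposal with <eos> forced into the kept set."""
    order = np.argsort(p)[::-1]
    k_mu = int(np.searchsorted(np.cumsum(p[order]), mu)) + 1
    keep = list(set(order[:k_mu].tolist()) | {eos_id})
    q = np.zeros_like(p)
    q[keep] = p[keep]
    return q / q.sum()
```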
Fixing the inconsistency ::: A Self-Terminating Recurrent Language Model
Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).
Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:
where
with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model.
The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding.
Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.
Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have
Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof.
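The display equation defining the self-terminating output layer is elided above; the sketch below gives one parameterization that matches the surrounding description (an $\left<\text{eos}\right>$ probability that is monotonically non-decreasing and lower-bounded by $1-(1-\epsilon)^{t+1}$), with the exact form used in the original model treated as an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_terminating_step(logits, prev_eos_prob, eos_id, eps=1e-3):
    """One output step of a self-terminating RLM (illustrative parameterization).

    `logits` are the usual scores u_v^T h_t + c_v.  The <eos> probability is
    accumulated so that it never decreases and is at least 1 - (1 - eps)^(t+1);
    the remaining mass is split among non-<eos> tokens by a softmax.
    """
    alpha = (1.0 - eps) * sigmoid(logits[eos_id])      # sigma: R -> [0, 1 - eps]
    p_eos = 1.0 - alpha * (1.0 - prev_eos_prob)        # monotone non-decreasing across steps
    others = np.delete(logits, eos_id)
    rest = np.exp(others - others.max())
    p = np.empty(len(logits))
    p[eos_id] = p_eos
    p[np.arange(len(logits)) != eos_id] = (1.0 - p_eos) * rest / rest.sum()
    return p, p_eos                                    # feed p_eos back in at the next step

# worst case: a large <eos> logit makes sigma ~ 1 - eps, the slowest possible growth; even
# then p_eos exceeds 1/2 once t > -log(2)/log(1 - eps) (about 693 steps for eps = 1e-3)
logits = np.zeros(10); logits[0] = 50.0
p_eos = 0.0
for _ in range(800):
    _, p_eos = self_terminating_step(logits, p_eos, eos_id=0)
print(round(p_eos, 3), p_eos > 0.5)   # ~0.551 True
```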
Empirical Validation
The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches.
Empirical Validation ::: Sequence completion.
We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$.
Empirical Validation ::: Dataset.
We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137.
Empirical Validation ::: Context distribution.
We define empirical context distributions with prefixes from the train, valid, and test sets,
where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split.
Empirical Validation ::: Evaluation metrics.
We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,
where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.
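As a small illustration of this metric, the following computes $r_L$ from the lengths of decoded continuations; decoding is cut off at $L$ steps, so continuations that hit the limit are counted as non-terminated.

```python
import numpy as np

def non_termination_ratio(decoded_lengths, L=1500):
    """r_L: proportion of decoded continuations that did not terminate within L steps."""
    lengths = np.asarray(decoded_lengths)
    return float((lengths >= L).mean())

# e.g. three decoded continuations, one of which hit the 1500-step limit
print(non_termination_ratio([24, 1500, 31]))   # 0.333...
```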
In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results.
Empirical Validation ::: Training.
We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$:
This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs.
Empirical Validation ::: Models.
We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.
Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search.
Empirical Validation ::: Inconsistency of Recurrent Language Models
In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).
Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.
In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination.
Empirical Validation ::: Consistency of the Proposed Methods
In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality.
Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.
Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.
Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.
As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.
For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.
Future Directions
The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.
One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:
where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight.
Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding.
Conclusion
We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.
Acknowledgements
We thank Chris Dyer, Noah Smith and Kevin Knight for valuable discussions. This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC thanks eBay and NVIDIA for their support. | There is a strong conjecture that it might be the reason but it is not proven. |
68edb6a483cdec669c9130c928994654f1c19839 | 68edb6a483cdec669c9130c928994654f1c19839_0 | Q: What metrics are used in challenge?
Text: Introduction
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and training it with different lengths of history features. From these experiments, we see a tendency that a model with fewer history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the VisDial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of the above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history, while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric, and such a model could be considered one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. Our analysis shows that the answers these two models produce are complementary, each being better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on the NDCG metric and high balance across metrics).
Related Work ::: Visual Question Answering (VQA)
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
Related Work ::: Visual Dialog
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
Models
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
Models ::: Features
Visual Features: For visual features, we use object features extracted from an image using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment) and $d_{v}$ is the dimension of the visual feature ($d_{v}$ = 2048 for the ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
Models ::: Image-Only Model
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of the projected visual features and the question feature, $d_m$ is the dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying a linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then obtain a visual representation vector, $v_{r}$, by taking a weighted sum of the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is a trainable parameter and $V_i$ is the $i$-th row vector of the visual feature matrix $V$. The visual representation vector and the question feature vector are combined with an element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$, which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
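To make the shapes in this module concrete, here is a forward-pass sketch in NumPy with randomly initialized parameters; the factor shapes follow the MFB description above, while the exact placement of the final projections is an assumption of the sketch rather than the trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d_v, d, d_m, m = 36, 2048, 512, 1024, 5       # objects, R-CNN dim, proj dim, MFB dim, factors

V_rcnn = rng.normal(size=(k, d_v))               # Faster R-CNN object features
q = rng.normal(size=d)                           # question feature (last LSTM hidden state)

P_v = rng.normal(scale=0.01, size=(d_v, d))      # Linear_{d_v x d} projection of visual features
M = rng.normal(scale=0.01, size=(d_m, d, m))     # MFB factors for the visual side
N = rng.normal(scale=0.01, size=(d_m, d, m))     # MFB factors for the question side
L = rng.normal(scale=0.01, size=(1, d_m))        # attention scoring parameter

V_proj = V_rcnn @ P_v                                           # (k, d)
z = np.einsum('kd,jdm,jem,e->kj', V_proj, M, N, q)              # factorized bilinear fusion, (k, d_m)
z = np.sign(z) * np.sqrt(np.abs(z))                             # power normalization
z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)       # l2 normalization -> z_hat
scores = (z @ L.T).ravel()                                      # (k,) attention logits
alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()     # softmax over objects
v = alpha @ V_proj                                              # attended visual vector, (d,)

W1, W2, W3 = (rng.normal(scale=0.01, size=(d, d)) for _ in range(3))
f = ((v @ W1) * (q @ W2)) @ W3                                  # element-wise fuse, then project
print(f.shape)                                                  # (d,)
```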
Models ::: Image-Only Model ::: Answer Selection
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by the dot product between the fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
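Candidate scoring then reduces to a dot product between the fused feature and each encoded answer, as in this small sketch (random vectors stand in for the learned encodings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_candidates = 512, 100
f_vq = rng.normal(size=d)                  # fused image-question feature f_{v_r}^{q_r}
A = rng.normal(size=(n_candidates, d))     # LSTM encodings of the 100 candidate answers a_{rl}

scores = A @ f_vq                          # s_{rl} = f_{v_r}^{q_r} . a_{rl}
ranking = np.argsort(scores)[::-1]         # ranked candidate list used for R@k, MRR, mean rank
print(ranking[:5])
```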
Models ::: Image-History Joint Model
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$.
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
Models ::: Image-History Joint Model ::: Round Dropout
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, except the image caption feature, and throw them away.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
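A minimal version of round dropout is sketched below; since the rule for $N_D^r$ is elided above, the number of dropped rounds is drawn uniformly from $\{0, \ldots, \min(3, N_h^r - 1)\}$ as an assumption, and the caption feature is never dropped.

```python
import numpy as np

def round_dropout(history_feats, max_drop=3, rng=np.random.default_rng(0)):
    """Randomly drop up to `max_drop` rounds of history features during training.

    `history_feats` is a list of per-round feature vectors whose first entry is
    the image-caption feature, which is kept; a random subset of the remaining
    rounds is thrown away.
    """
    caption, rounds = history_feats[0], list(history_feats[1:])
    n_drop = int(rng.integers(0, min(max_drop, len(rounds)) + 1))
    drop_idx = set(rng.choice(len(rounds), size=n_drop, replace=False).tolist()) if n_drop else set()
    return [caption] + [h for i, h in enumerate(rounds) if i not in drop_idx]

# e.g. at round 6 there are 6 history features (caption + 5 question-answer rounds)
history = [np.full(4, i, dtype=float) for i in range(6)]
print(len(round_dropout(history)))   # between 3 and 6
```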
Models ::: Combining Image-Only & Image-History Joint Models
Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout
To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
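The fusion itself can be sketched as below; the exact form of $I_{drop}$ is elided above, so the mask here follows the standard inverted-dropout recipe applied per instance (an assumption), zeroing an instance's entire joint-logit row with probability $p$.

```python
import numpy as np

def consensus_dropout_fusion(logits_img, logits_joint, p=0.25, train=True, seed=0):
    """L_IJ = L_I + I_drop * L_J, with instance-level dropout on the joint logits.

    Both logit tensors have shape (batch, n_candidates); during training each
    instance's joint-logit row is dropped with probability p and survivors are
    rescaled by 1/(1-p), which lets the image-only model exert a stronger,
    more balanced influence on the fused prediction.
    """
    if train:
        rng = np.random.default_rng(seed)
        keep = (rng.random(logits_joint.shape[0]) >= p) / (1.0 - p)
        logits_joint = logits_joint * keep[:, None]      # broadcast over the candidate axis
    return logits_img + logits_joint

fused = consensus_dropout_fusion(np.ones((4, 100)) * 2.0, np.ones((4, 100)) * 3.0)
print(fused.shape)   # (4, 100)
```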
Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble
We also integrate our 2 models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits.
Experimental Setup ::: Dataset
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
Experimental Setup ::: Metrics
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge, which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank, which only consider the rank of a single answer. Our experiments show that the scores of the NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different abilities (as shown in Sec.SECREF41) in completing the Visual Dialog task: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
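For reference, the single-answer metrics can be computed from the rank of each ground-truth answer as in this sketch (NDCG additionally requires the dense relevance annotations, so it is omitted here):

```python
import numpy as np

def ranking_metrics(gt_ranks, ks=(1, 5, 10)):
    """MRR, recall@k, and mean rank from 1-indexed ground-truth answer ranks."""
    r = np.asarray(gt_ranks, dtype=float)
    metrics = {"MRR": float((1.0 / r).mean()), "Mean": float(r.mean())}
    for k in ks:
        metrics[f"R@{k}"] = float((r <= k).mean())
    return metrics

print(ranking_metrics([1, 3, 20, 7]))   # {'MRR': 0.38..., 'Mean': 7.75, 'R@1': 0.25, ...}
```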
Experimental Setup ::: Training Details
In our models, the size of the word vectors is 300, the dimension of the visual feature is 2048, and the hidden size of the LSTM units used for the encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss.
Analysis and Results
In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
Analysis and Results ::: Human Evaluation: Is Image Alone Enough?
We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images, instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently, and questions which both annotators mark as answerable only with images are classified as only-image questions; the rest are classified as need-history questions. The inter-annotator agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered), and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy).
Analysis and Results ::: Reduced Question-Answer Rounds
We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased, the score of NDCG is increased while other metrics are decreased. If we regard the primary Visual Dialog metric, NDCG, as a barometer of a model's ability to generalize and the other metrics as indicators of preciseness, this means that a reduced history size gives a model better generalization at the cost of preciseness. Following this tendency, the image-only model has the highest NDCG score.
Analysis and Results ::: Complementary Relation
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together.
Analysis and Results ::: Model Combination Results
Considering the complementary relation between the image-only model and the joint model, combining the two models would be a good approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the score of the joint model while still yielding comparable scores for the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require much of an increase in the number of model parameters.
Analysis and Results ::: Model Combination Results ::: Ensemble Model Results
As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation.
Analysis and Results ::: Final Visual Dialog Test Results
For the evaluation on the test-standard dataset of VisDial v1.0, we try an ensemble of 6 image-only models and an ensemble of 6 consensus dropout fusion models. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows a much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to the results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, ranking 3rd based on the NDCG metric and high in terms of balance based on the metric average.
Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models
We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected.
Ablation Study
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of the features during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rates affect the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score also increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model.
Output Examples: Due to space constraints, and because AAAI rules do not allow supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models, are also provided.
Conclusion
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
Acknowledgments
We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. | NDCG, MRR, recall@k, mean rank |
f64531e460e0ac09b58584047b7616fdb7dd5b3f | f64531e460e0ac09b58584047b7616fdb7dd5b3f_0 | Q: What model was winner of the Visual Dialog challenge 2019?
Text: Introduction
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics).
Related Work ::: Visual Question Answering (VQA)
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
Related Work ::: Visual Dialog
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
Models
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
Models ::: Features
Visual Features: For visual features, we use object features which are extracted from an image with Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects ($k$=36 in our experiment) and $d_{v}$ is the dimension of the visual features ($d_{v}$ = 2048 for the ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
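Since the question, history, and candidate-answer encoders all follow the same recipe (embed the tokens, run an LSTM, keep the last hidden state), a minimal PyTorch-style sketch of such an encoder is given below; the class name, batching, and padding handling are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Illustrative LSTM encoder: embeds a token sequence and returns the last hidden state."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, T) padded word indices for a question, a history round, or an answer
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]  # (batch, hidden_dim): q_r, H_r, or a_{r,l} depending on the input
```

Note that a full implementation would need real sequence-length handling (e.g., packed sequences) so that the hidden state is taken at each sequence's last non-padded token; the sketch above simply takes the state after the final step.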
Models ::: Image-Only Model
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
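To make the MFB-based attention concrete, the sketch below re-implements the steps above under one reading of the equations; the hyper-parameter values (d=512, d_m=1024, m=5) and the choice to attend over the projected object features are assumptions, not reported details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFBAttention(nn.Module):
    """Illustrative MFB fusion + attention over k object features w.r.t. a question feature."""
    def __init__(self, d_v=2048, d=512, d_m=1024, m=5):
        super().__init__()
        self.proj_v = nn.Linear(d_v, d)      # Linear_{d_v x d}
        self.U = nn.Linear(d, d_m * m)       # factorized projection of visual features
        self.W = nn.Linear(d, d_m * m)       # factorized projection of the question feature
        self.L = nn.Linear(d_m, 1)           # produces attention logits
        self.d_m, self.m = d_m, m

    def forward(self, V, q):
        # V: (k, d_v) object features, q: (d,) question representation q_r
        Vp = self.proj_v(V)                                    # (k, d)
        z = self.U(Vp) * self.W(q).unsqueeze(0)                # (k, d_m * m), element-wise product
        z = z.view(-1, self.d_m, self.m).sum(dim=2)            # sum-pool over the m factors -> (k, d_m)
        z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-12)   # power normalization
        z_hat = F.normalize(z, dim=1)                          # l2 normalization
        alpha = torch.softmax(self.L(z_hat).squeeze(1), dim=0) # (k,) attention weights
        v = alpha.unsqueeze(1).mul(Vp).sum(dim=0)              # attended visual vector v_r (d,)
        return v, alpha
```

The attended vector and the question feature would then be combined by element-wise product and fully-connected layers to give the final feature $f_{v_r}^{q_r}$, as in the equations above.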
Models ::: Image-Only Model ::: Answer Selection
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as the question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
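A minimal sketch of this ranking step (with hypothetical tensor names) follows; the optional cross-entropy term reflects the loss mentioned in the training details.

```python
import torch
import torch.nn.functional as F

def score_candidates(f_qv, answer_reps, gt_index=None):
    """Rank the 100 candidate answers by dot product with the fused feature.

    f_qv:        (d,)      fused image-question feature f^{q_r}_{v_r}
    answer_reps: (100, d)  LSTM-encoded candidate answers a_{r,l}
    """
    scores = answer_reps @ f_qv                        # s_{r,l} = f^{q_r}_{v_r} . a_{r,l}
    ranking = torch.argsort(scores, descending=True)   # predicted ranking, best first
    loss = None
    if gt_index is not None:
        loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([gt_index]))
    return scores, ranking, loss
```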
Models ::: Image-History Joint Model
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
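Because the fused-feature equations are abbreviated here, the sketch below fills in one plausible BiDAF-style reading of the cited similarity-based fusion (a trilinear similarity followed by cross-attention in both directions); treat the pooling choices as our assumptions.

```python
import torch
import torch.nn as nn

class VisualHistoryFusion(nn.Module):
    """Sketch of cross-similarity fusion between k object features and r history features."""
    def __init__(self, d=512):
        super().__init__()
        self.w_s = nn.Linear(3 * d, 1, bias=False)   # trilinear similarity weights w_s

    def forward(self, V, H):
        # V: (k, d) projected visual features, H: (r, d) history features H_{1:r}
        k, r = V.size(0), H.size(0)
        Ve = V.unsqueeze(1).expand(k, r, -1)                              # (k, r, d)
        He = H.unsqueeze(0).expand(k, r, -1)                              # (k, r, d)
        S = self.w_s(torch.cat([Ve, He, Ve * He], dim=-1)).squeeze(-1)    # similarity (k, r)
        H_fused = torch.softmax(S, dim=1) @ H                             # history attended per object: (k, d)
        V_fused = torch.softmax(S, dim=0).transpose(0, 1) @ V             # objects attended per round: (r, d)
        return V_fused, H_fused
```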
Models ::: Image-History Joint Model ::: Round Dropout
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, excluding the image caption feature, and discard them.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
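A minimal sketch of round dropout, assuming the caption feature is always kept and up to three later rounds are discarded uniformly at random (the sampling details are our assumption where the text leaves them open):

```python
import random

def round_dropout(history_features, max_drop=3):
    """Randomly discard up to `max_drop` history rounds, keeping the caption feature H_1.

    history_features: list of per-round history vectors [H_1 (caption), H_2, ..., H_r].
    """
    caption, rounds = history_features[0], history_features[1:]
    n_drop = random.randint(0, min(max_drop, len(rounds)))
    keep = sorted(random.sample(range(len(rounds)), len(rounds) - n_drop))
    return [caption] + [rounds[i] for i in keep]
```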
Models ::: Combining Image-Only & Image-History Joint Models
Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout
To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
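Putting the consensus addition and the instance dropout together, a hedged sketch of the fused logits is shown below; whether the surviving joint-model logits are rescaled (inverted dropout) is not stated, so that line is an assumption.

```python
import torch

def consensus_dropout_fusion(logits_image, logits_joint, p=0.25, training=True):
    """Add image-only and joint logits, dropping whole joint-model instances with probability p.

    logits_image, logits_joint: (N*R, 100) answer logits from the two branches.
    """
    if training:
        keep = torch.bernoulli(torch.full((logits_joint.size(0), 1), 1.0 - p,
                                          device=logits_joint.device))
        logits_joint = logits_joint * keep / (1.0 - p)   # inverted-dropout scaling (assumed)
    return logits_image + logits_joint
```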
Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble
We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take the logits from the pre-trained models and select the answer with the highest sum of logits.
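The test-time ensemble then amounts to summing logits across separately trained models and taking the argmax; a minimal sketch with hypothetical names:

```python
import torch

def ensemble_predict(list_of_logits):
    """Sum the (100,)-dimensional logit vectors from each trained model and pick the top answer."""
    summed = torch.stack(list_of_logits, dim=0).sum(dim=0)
    return int(torch.argmax(summed))   # index of the highest-scoring candidate answer
```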
Experimental Setup ::: Dataset
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
Experimental Setup ::: Metrics
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
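For reference, simplified versions of these metrics can be computed as below; the official VisDial evaluation additionally truncates the NDCG list at the number of relevant candidates, which is omitted here for brevity.

```python
import numpy as np

def retrieval_metrics(gt_rank):
    """MRR, recall@k, and mean rank from the 1-based rank of the single ground-truth answer."""
    return {"MRR": 1.0 / gt_rank,
            "R@1": float(gt_rank <= 1),
            "R@5": float(gt_rank <= 5),
            "R@10": float(gt_rank <= 10),
            "MeanRank": float(gt_rank)}

def ndcg(relevance_by_rank):
    """Simplified NDCG: relevance scores of the candidates ordered by the model's ranking."""
    rel = np.asarray(relevance_by_rank, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))
    dcg = float((rel * discounts).sum())
    idcg = float((np.sort(rel)[::-1] * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0
```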
Experimental Setup ::: Training Details
In our models, the size of the word vectors is 300, the dimension of the visual features is 2048, and the hidden size of the LSTM units used for the question, context-history, and candidate-answer encoders is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the p value of our instance dropout in the consensus dropout fusion module to 0.25. Cross-entropy is used to calculate the loss.
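One plausible reading of this learning-rate schedule, written out as code (the behaviour after the 8th epoch is our interpretation of "decay by 0.5"):

```python
def learning_rate(epoch, base_lr=1e-3):
    """Sketch of the schedule described above (epochs are 1-indexed)."""
    if epoch <= 8:
        return base_lr - 1e-4 * (epoch - 1)   # 0.001, 0.0009, ..., 0.0003 at epoch 8
    lr_at_8 = base_lr - 1e-4 * 7              # 0.0003
    return lr_at_8 * (0.5 ** (epoch - 8))     # halve each epoch from epoch 9 on (assumed)
```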
Analysis and Results
In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
Analysis and Results ::: Human Evaluation: Is Image Alone Enough?
We conduct a human evaluation on images, history, and questions. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check whether the answers can be inferred from the corresponding questions and images, instead of providing all 100 candidate answers). Two annotators conduct the experiment independently; questions which both annotators mark as answerable only with images are classified as only-image questions, and the rest as need-history questions. The inter-annotator agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be resolved (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person, who is a boy).
Analysis and Results ::: Reduced Question-Answer Rounds
We next run our joint model with various lengths of history. To be specific, we make our joint model use only the $k$ previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the metric values and the number of history features. As the number of history features the joint model uses increases, the NDCG score decreases while the other metrics increase; conversely, as the number of history features decreases, the NDCG score increases while the other metrics decrease. If we regard the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics as indicators of preciseness, this means that a smaller history gives a model more generalization ability at the cost of preciseness. Consistent with this tendency, the image-only model has the highest NDCG score.
Analysis and Results ::: Complementary Relation
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To explore this possibility, we compare the answers from the image-only model and the joint model. To be specific, for R@1, we list the correct answers from each model and count the answers which are in both sets, i.e., the intersection; from the intersection, we also obtain the union of the two sets. For NDCG, there is no single correct answer, so we roughly calculate the intersection by taking the minimum of the two models' scores and averaging them. As we can see in Table TABREF42, the intersection does not account for the entire score of either model for either metric. This suggests the image-only and joint models have room to be improved by combining them.
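The R@1 part of this overlap analysis can be reproduced with a few lines over the two models' top-1 predictions; the function and variable names below are hypothetical.

```python
def overlap_at_1(preds_image, preds_joint, gt):
    """Fraction of questions answered correctly by both models (intersection)
    and by at least one of them (union), given dicts {question_id: predicted answer id}."""
    correct_i = {q for q, p in preds_image.items() if p == gt[q]}
    correct_j = {q for q, p in preds_joint.items() if p == gt[q]}
    n = len(gt)
    return len(correct_i & correct_j) / n, len(correct_i | correct_j) / n
```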
Analysis and Results ::: Model Combination Results
Considering the complementary relation between the image-only model and the joint model, combining the two models is a natural approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the joint model while still yielding comparable scores on the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require a large increase in the number of model parameters.
Analysis and Results ::: Model Combination Results ::: Ensemble Model Results
As also shown in Table TABREF46, the ensemble model appears to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model, and the scores of the other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are complementary.
Analysis and Results ::: Final Visual Dialog Test Results
For the evaluation on the test-standard dataset of VisDial v1.0, we try a 6-model image-only ensemble and a 6-model consensus dropout fusion ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art of the Visual Dialog challenge 2018 (DL-61 was the winner of that challenge). Specifically, our image-only model shows a much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to the results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results across metrics while still having a competitive NDCG score compared to DAN BIBREF25, ranking 3rd on the NDCG metric and among the most balanced by metric average.
Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models
We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected.
Ablation Study
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout helps the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of those features during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint) on the combined model. We choose a dropout rate of 0.25 since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations of image-only and joint models to build ensemble models. This gives three combinations, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, the scores of the I+J ensemble model are comparable to those of the same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for the other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model come from the complementary relation between the image-only and image-history joint models.
Output Examples: Due to space constraints, and because AAAI rules do not allow supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models, are also provided.
Conclusion
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
Acknowledgments
We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. | Unanswerable |
cee29acec4da1b247795daa4e2e82ef8a7b25a64 | cee29acec4da1b247795daa4e2e82ef8a7b25a64_0 | Q: What model was winner of the Visual Dialog challenge 2018?
Text: Introduction
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics).
Related Work ::: Visual Question Answering (VQA)
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
Related Work ::: Visual Dialog
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
Models
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
Models ::: Features
Visual Features: For visual features, we use object features which are extracted from an image with Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects ($k$=36 in our experiment) and $d_{v}$ is the dimension of the visual features ($d_{v}$ = 2048 for the ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
Models ::: Image-Only Model
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
Models ::: Image-Only Model ::: Answer Selection
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as the question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
Models ::: Image-History Joint Model
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
Models ::: Image-History Joint Model ::: Round Dropout
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, excluding the image caption feature, and discard them.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
Models ::: Combining Image-Only & Image-History Joint Models
Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout
To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble
We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take the logits from the pre-trained models and select the answer with the highest sum of logits.
Experimental Setup ::: Dataset
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
Experimental Setup ::: Metrics
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
Experimental Setup ::: Training Details
In our models, the size of the word vectors is 300, the dimension of the visual features is 2048, and the hidden size of the LSTM units used for the question, context-history, and candidate-answer encoders is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the p value of our instance dropout in the consensus dropout fusion module to 0.25. Cross-entropy is used to calculate the loss.
Analysis and Results
In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
Analysis and Results ::: Human Evaluation: Is Image Alone Enough?
We conduct a human evaluation on images, history, and questions. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check whether the answers can be inferred from the corresponding questions and images, instead of providing all 100 candidate answers). Two annotators conduct the experiment independently; questions which both annotators mark as answerable only with images are classified as only-image questions, and the rest as need-history questions. The inter-annotator agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be resolved (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person, who is a boy).
Analysis and Results ::: Reduced Question-Answer Rounds
We next run our joint model with various lengths of history. To be specific, we make our joint model use only the $k$ previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the metric values and the number of history features. As the number of history features the joint model uses increases, the NDCG score decreases while the other metrics increase; conversely, as the number of history features decreases, the NDCG score increases while the other metrics decrease. If we regard the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics as indicators of preciseness, this means that a smaller history gives a model more generalization ability at the cost of preciseness. Consistent with this tendency, the image-only model has the highest NDCG score.
Analysis and Results ::: Complementary Relation
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To explore this possibility, we compare the answers from the image-only model and the joint model. To be specific, for R@1, we list the correct answers from each model and count the answers which are in both sets, i.e., the intersection; from the intersection, we also obtain the union of the two sets. For NDCG, there is no single correct answer, so we roughly calculate the intersection by taking the minimum of the two models' scores and averaging them. As we can see in Table TABREF42, the intersection does not account for the entire score of either model for either metric. This suggests the image-only and joint models have room to be improved by combining them.
Analysis and Results ::: Model Combination Results
Considering the complementary relation between the image-only model and the joint model, combining the two models is a natural approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the joint model while still yielding comparable scores on the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require a large increase in the number of model parameters.
Analysis and Results ::: Model Combination Results ::: Ensemble Model Results
As also shown in Table TABREF46, the ensemble model appears to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model, and the scores of the other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are complementary.
Analysis and Results ::: Final Visual Dialog Test Results
For the evaluation on the test-standard dataset of VisDial v1.0, we try a 6-model image-only ensemble and a 6-model consensus dropout fusion ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art of the Visual Dialog challenge 2018 (DL-61 was the winner of that challenge). Specifically, our image-only model shows a much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to the results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results across metrics while still having a competitive NDCG score compared to DAN BIBREF25, ranking 3rd on the NDCG metric and among the most balanced by metric average.
Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models
We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected.
Ablation Study
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout helps the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of those features during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint) on the combined model. We choose a dropout rate of 0.25 since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations of image-only and joint models to build ensemble models. This gives three combinations, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, the scores of the I+J ensemble model are comparable to those of the same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for the other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model come from the complementary relation between the image-only and image-history joint models.
Output Examples: Due to space constraints, and because AAAI rules do not allow supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models, are also provided.
Conclusion
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
Acknowledgments
We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. | DL-61 |
7e54c7751dbd50d9d14b9f8b13dc94947a46e42f | 7e54c7751dbd50d9d14b9f8b13dc94947a46e42f_0 | Q: Which method for integration performs better, ensemble or consensus dropout fusion with shared parameters?
Text: Introduction
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics).
Related Work ::: Visual Question Answering (VQA)
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
Related Work ::: Visual Dialog
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
Models
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
Models ::: Features
Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
Models ::: Image-Only Model
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
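To make the attention step above more concrete, the following is a simplified sketch of question-guided attention over the k object features. It replaces the MFB fusion and the linear projection $L$ with a plain dot-product score for brevity, so it illustrates the overall flow rather than the exact model; all tensors are random stand-ins, and the dimensions (k=36, d=512) are assumed here.

```python
# Simplified sketch of question-guided attention over k object features
# (a dot-product score stands in for the MFB fusion used in the paper).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

k, d = 36, 512
rng = np.random.default_rng(0)
V = rng.normal(size=(k, d))    # projected object features
q = rng.normal(size=d)         # question representation

scores = V @ q                 # one score per object
alpha = softmax(scores)        # attention weights over objects
v = alpha @ V                  # attended visual vector (weighted sum of rows)
fused = v * q                  # element-wise product with the question feature
print(fused.shape)             # (512,)
```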
Models ::: Image-Only Model ::: Answer Selection
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
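A minimal sketch of this answer-selection step, with random stand-ins for the fused feature and the 100 encoded candidates:

```python
# Sketch of answer selection: score each of the 100 candidate answers by a dot
# product with the fused feature and rank them.
import numpy as np

rng = np.random.default_rng(0)
d = 512
f = rng.normal(size=d)                  # fused question-visual feature f_{v_r}^{q_r}
A = rng.normal(size=(100, d))           # encoded candidate answers a_{rl}
scores = A @ f                          # s_{rl} = f . a_{rl}
predicted = int(np.argmax(scores))      # highest-scoring candidate
ranking = np.argsort(-scores)           # full ranking used by MRR / recall@k
print(predicted, ranking[:5])
```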
Models ::: Image-History Joint Model
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are the weighted sums of the fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
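The exact fusion formulas follow BIBREF15 and are not reproduced above, so the sketch below shows one common co-attention-style variant as an assumption: each modality is augmented with a similarity-weighted summary of the other before the MFB step. It should be read as an illustration of the idea, not the paper's precise equations.

```python
# Hedged sketch of visual-history fusion via a similarity matrix S (k x r):
# each modality is concatenated with an attention-weighted summary of the other.
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

k, r, d = 36, 5, 512
rng = np.random.default_rng(0)
V = rng.normal(size=(k, d))        # k object features
H = rng.normal(size=(r, d))        # r history features

S = V @ H.T                        # (k, r) similarity matrix (simplified scoring)
H_fused = np.concatenate([H, softmax(S, axis=0).T @ V], axis=1)   # history + attended visual
V_fused = np.concatenate([V, softmax(S, axis=1) @ H], axis=1)     # visual + attended history
print(V_fused.shape, H_fused.shape)   # (36, 1024) (5, 1024)
```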
Models ::: Image-History Joint Model ::: Round Dropout
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, excluding the image caption feature, and discard them.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
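A minimal sketch of round dropout, assuming the dropped rounds are sampled uniformly and the caption feature is always kept:

```python
# Sketch of round dropout: randomly discard up to 3 rounds of history features,
# never dropping the caption feature H_1 (index 0 here).
import random
import numpy as np

def round_dropout(history, max_drop=3):
    # history: list of per-round feature vectors; history[0] is the caption feature
    droppable = list(range(1, len(history)))
    n_drop = random.randint(0, min(max_drop, len(droppable)))
    dropped = set(random.sample(droppable, n_drop))
    return [h for i, h in enumerate(history) if i not in dropped]

history = [np.random.randn(512) for _ in range(8)]   # caption + 7 QA rounds
print(len(round_dropout(history)))                   # between 5 and 8
```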
Models ::: Combining Image-Only & Image-History Joint Models
Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout
To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
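The following sketch illustrates consensus dropout fusion: the two logits are added, but whole instances of the joint model's logits are zeroed out with probability p. The exact mask construction and scaling in the paper follow BIBREF20; this simplified version applies no rescaling and treats the batch as already flattened over rounds.

```python
# Simplified sketch of consensus dropout fusion with instance dropout:
# drop whole instances of the joint model's logits with probability p.
import numpy as np

def consensus_dropout_fusion(L_I, L_J, p=0.25, training=True, rng=None):
    # L_I, L_J: (num_instances, num_candidates) logits from the two models
    if not training:
        return L_I + L_J
    rng = rng or np.random.default_rng()
    keep = (rng.random((L_I.shape[0], 1)) >= p).astype(L_J.dtype)  # one draw per instance
    return L_I + L_J * keep

L_I = np.random.randn(4, 100)
L_J = np.random.randn(4, 100)
print(consensus_dropout_fusion(L_I, L_J).shape)   # (4, 100)
```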
Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble
We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take the logits from the pre-trained models and select the answer with the highest sum of logits.
Experimental Setup ::: Dataset
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
Experimental Setup ::: Metrics
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
Experimental Setup ::: Training Details
In our models, the size of the word vectors is 300, the dimension of the visual features is 2048, and the hidden size of the LSTM units used as encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss.
Analysis and Results
In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
Analysis and Results ::: Human Evaluation: Is Image Alone Enough?
We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy).
Analysis and Results ::: Reduced Question-Answer Rounds
We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score.
Analysis and Results ::: Complementary Relation
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To explore this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list the correct answers from each model and count the answers which are in both sets, i.e., the intersection. From the intersection and the two sets, we obtain their union. For NDCG, there is no single correct answer, so we roughly calculate the intersection by taking the minimum of the two models' scores and averaging these values. As we can see in Table TABREF42, the intersection does not account for the entire score of either model on either metric. This could mean the image-only and joint models can be improved by combining them.
Analysis and Results ::: Model Combination Results
Considering the complementary relation between the image-only model and the joint model, combining the two models is a natural approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the joint model while still yielding comparable scores for the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require a large increase in the number of model parameters.
Analysis and Results ::: Model Combination Results ::: Ensemble Model Results
As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation.
Analysis and Results ::: Final Visual Dialog Test Results
For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average.
Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models
We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected.
Ablation Study
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout helps the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of those features during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint) over the combined model. We choose a dropout rate of 0.25 since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations of image-only and joint models to build ensemble models. This yields three ensemble models, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J). As shown in Table TABREF54, the scores of the I+J ensemble model are comparable to those of the same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for the other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model come from the complementary relation between the image-only and image-history joint models.
Output Examples: Due to space constraints and because AAAI rules do not allow supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of image-only questions and the ranking lists of the image-history joint and image-only models are also provided.
Conclusion
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
Acknowledgments
We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. | ensemble model |
d3bcfcea00dec99fa26283cdd74ba565bc907632 | d3bcfcea00dec99fa26283cdd74ba565bc907632_0 | Q: How big is dataset for this challenge?
Text: Introduction
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics).
Related Work ::: Visual Question Answering (VQA)
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
Related Work ::: Visual Dialog
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
Models
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
Models ::: Features
Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
Models ::: Image-Only Model
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
Models ::: Image-Only Model ::: Answer Selection
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
Models ::: Image-History Joint Model
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are the weighted sums of the fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
Models ::: Image-History Joint Model ::: Round Dropout
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, excluding the image caption feature, and discard them.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
Models ::: Combining Image-Only & Image-History Joint Models
Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout
To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble
We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take the logits from the pre-trained models and select the answer with the highest sum of logits.
Experimental Setup ::: Dataset
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
Experimental Setup ::: Metrics
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
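For reference, the toy sketch below contrasts the two kinds of metrics: NDCG computed over dense relevance scores for all 100 candidates, and MRR computed from the rank of a single ground-truth answer. The exact NDCG variant used by the VisDial evaluation server may differ in details (e.g., the cut-off k); the relevances and rankings here are made up.

```python
# Toy sketch of NDCG (dense relevances) vs. MRR (single ground-truth answer).
import numpy as np

def ndcg(relevances, ranking, k=None):
    k = k or len(ranking)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (relevances[ranking[:k]] * discounts).sum()
    idcg = (np.sort(relevances)[::-1][:k] * discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0

def mrr(gt_index, ranking):
    return 1.0 / (list(ranking).index(gt_index) + 1)

relevances = np.zeros(100)
relevances[[3, 17, 42]] = [1.0, 0.8, 0.5]          # dense annotations (toy values)
ranking = np.argsort(-np.random.randn(100))        # a model's predicted ranking
print(round(ndcg(relevances, ranking), 3), round(mrr(3, ranking), 3))
```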
Experimental Setup ::: Training Details
In our models, the size of the word vectors is 300, the dimension of the visual features is 2048, and the hidden size of the LSTM units used as encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss.
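A small sketch of the learning-rate schedule described above, assuming the linear decrease is applied once per epoch and the 0.5 decay is applied every epoch from the 9th on (the text does not pin down these details):

```python
# Sketch of the assumed learning-rate schedule: linear decrease to epoch 8,
# then halving each epoch.
def learning_rate(epoch, base_lr=0.001):
    if epoch <= 8:
        return base_lr - 0.0001 * (epoch - 1)      # 0.001, 0.0009, ..., 0.0003
    lr_at_8 = base_lr - 0.0001 * 7
    return lr_at_8 * (0.5 ** (epoch - 8))          # halve each epoch afterwards

for e in range(1, 12):
    print(e, round(learning_rate(e), 6))
```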
Analysis and Results
In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
Analysis and Results ::: Human Evaluation: Is Image Alone Enough?
We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy).
Analysis and Results ::: Reduced Question-Answer Rounds
We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score.
Analysis and Results ::: Complementary Relation
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To explore this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list the correct answers from each model and count the answers which are in both sets, i.e., the intersection. From the intersection and the two sets, we obtain their union. For NDCG, there is no single correct answer, so we roughly calculate the intersection by taking the minimum of the two models' scores and averaging these values. As we can see in Table TABREF42, the intersection does not account for the entire score of either model on either metric. This could mean the image-only and joint models can be improved by combining them.
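The sketch below mirrors this analysis with toy numbers: an exact intersection/union of the R@1 correct sets, and a rough NDCG "intersection" obtained by taking per-question minimum scores of the two models and averaging them.

```python
# Toy version of the complementarity analysis described above.
import numpy as np

correct_image_only = {1, 4, 7, 9}                  # toy question ids correct at R@1
correct_joint = {2, 4, 8, 9}
intersection = correct_image_only & correct_joint
union = correct_image_only | correct_joint
print(len(intersection), len(union))               # 2 6

ndcg_image_only = np.array([0.62, 0.48, 0.71])     # per-question NDCG, model A (toy)
ndcg_joint = np.array([0.55, 0.60, 0.66])          # per-question NDCG, model B (toy)
rough_intersection = np.minimum(ndcg_image_only, ndcg_joint).mean()
print(round(rough_intersection, 3))                # 0.563
```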
Analysis and Results ::: Model Combination Results
Considering the complementary relation between the image-only model and the joint model, combining the two models is a natural approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the joint model while still yielding comparable scores for the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require a large increase in the number of model parameters.
Analysis and Results ::: Model Combination Results ::: Ensemble Model Results
As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation.
Analysis and Results ::: Final Visual Dialog Test Results
For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average.
Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models
We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected.
Ablation Study
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout helps the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of those features during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint) over the combined model. We choose a dropout rate of 0.25 since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations of image-only and joint models to build ensemble models. This yields three ensemble models, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J). As shown in Table TABREF54, the scores of the I+J ensemble model are comparable to those of the same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for the other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model come from the complementary relation between the image-only and image-history joint models.
Output Examples: Due to space constraints and because AAAI rules do not allow supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of image-only questions and the ranking lists of the image-history joint and image-only models are also provided.
Conclusion
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
Acknowledgments
We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. | 133,287 images |
cdf65116a7c50edddcb115e9afd86b2b6accb8ad | cdf65116a7c50edddcb115e9afd86b2b6accb8ad_0 | Q: What open relation extraction tasks did they experiment on?
Text: Introduction
Semantic applications typically work on the basis of intermediate structures derived from sentences. Traditional word-level intermediate structures, such as POS-tags, dependency trees and semantic role labels, have been widely applied. Recently, entity and relation level intermediate structures have attracted increasingly more attention.
In general, knowledge based applications require entity and relation level information. For instance, in BIBREF0 , the lexicalized dependency path between two entity mentions was taken as the surface pattern facts. In distant supervision BIBREF1 , the word sequence and dependency path between two entity mentions were taken as evidence of certain relation. In Probase BIBREF2 , candidates of taxonomies were extracted by Hearst patterns BIBREF3 . The surface patterns of relations extracted by Open Information Extraction (OIE) systems BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 worked as the source of question answering systems BIBREF9 , BIBREF10 . In addition, entity and relation level intermediate structures have been proven effective in many other tasks such as text summarization BIBREF11 , BIBREF12 , BIBREF13 , text comprehension, word similarity, word analogy BIBREF14 , and more.
The task of entity/relation level mediate structure extraction studies how facts about entities and relations are expressed by natural language in sentences, and then expresses these facts in an intermediate (and convenient) format. Although entity/relation level intermediate structures have been utilized in many applications, the study of learning these structures is still in an early stage.
Firstly, the problem of extracting different types of entity/relation level intermediate structures has not been considered in a unified fashion. Applications generally need to construct their own handcrafted heuristics to extract required entity/relation level intermediate structures, rather than consulting a commonly available NLP component, as they do for word level intermediate structures. Open IE-v4 system (http://knowitall.github.io/openie/) attempted to build such components by developing two sub-systems, with each extracting one type of intermediate structures, i.e., SRLIE BIBREF15 for verb based relations, and ReNoun BIBREF16 , BIBREF17 for nominal attributes. However, important information about descriptive tags for entities and concept-instance relations between entities were not considered.
Secondly, existing solutions to the task either used pattern matching techniques BIBREF2 , BIBREF4 , BIBREF6 , BIBREF7 , or were trained in a self-supervised manner on data sets automatically generated by heuristic patterns or info-box matching BIBREF7 , BIBREF4 , BIBREF8 . It is well-understood that pattern matching typically does not generalize well and the automatically generated samples may contain a lot of noise.
This paper aims at tackling some of the well-known challenging problems in OIE systems, in a supervised end-to-end deep learning paradigm. Our contribution can be summarized as three major components: SAOKE format, SAOKE data set, and Logician.
Symbol Aided Open Knowledge Expression (SAOKE) is a knowledge expression form with several desirable properties: (i) SAOKE is literally honest and open-domain. Following the philosophy of OIE systems, SAOKE uses words in the original sentence to express knowledge. (ii) SAOKE provides a unified view over four common types of knowledge: relation, attribute, description and concept. (iii) SAOKE is an accurate expression. With the aid of symbolic system, SAOKE is able to accurately express facts with separated relation phrases, missing information, hidden information, etc.
SAOKE Data Set is a human annotated data set containing 48,248 Chinese sentences and corresponding facts in the SAOKE form. We publish the data set for research purpose. To the best of our knowledge, this is the largest publicly available human annotated data set for open-domain information extraction tasks.
Logician is a supervised end-to-end neural learning algorithm which transforms natural language sentences into facts in the SAOKE form. Logician is trained under the attention-based sequence-to-sequence paradigm, with three mechanisms: a restricted copy mechanism to ensure literal honesty, a coverage mechanism to alleviate the under-extraction and over-extraction problems, and a gated dependency attention mechanism to incorporate dependency information. Experimental results on four types of open information extraction tasks reveal the superiority of the Logician algorithm.
Our work will demonstrate that SAOKE format is suitable for expressing various types of knowledge and is friendly to end-to-end learning algorithms. Particularly, we will focus on showing that the supervised end-to-end learning is promising for OIE tasks, to extract entity and relation level intermediate structures.
The rest of this paper is organized as follows. Section "SAOKE Format: Symbol Aided Open Knowledge Expression" presents the details of SAOKE. Section "SAOKE Data Set" describes the human labeled SAOKE data set. Section "Logician" describes the Logician algorithm and Section "Empirical Evaluation" evaluates the Logician algorithm and compares its performance with the state-of-the-art algorithms on four OIE tasks. Section "Related Works" discusses the related work and Section "Conclusion" concludes the paper.
SAOKE Format: Symbol Aided Open Knowledge Expression
When reading a sentence in natural language, humans are able to recognize the facts involved in the sentence and accurately express them. In this paper, Symbol Aided Open Knowledge Expression (SAOKE) is proposed as the form for honestly recording these facts. SAOKE expresses the primary information of sentences in n-ary tuples $(subject,predicate,object_{1},\cdots ,object_{N})$ , and (in this paper) neglects some auxiliary information. In the design of SAOKE, we take four requirements into consideration: completeness, accurateness, atomicity and compactness.
Completeness
After having analyzed a large number of sentences, we observe that the majority of facts can be classified into the following classes:
Relation: Verb/preposition based n-ary relations between entity mentions BIBREF15 , BIBREF6 ;
Attribute:Nominal attributes for entity mentions BIBREF16 , BIBREF17 ;
Description: Descriptive phrases of entity mentions BIBREF18 ;
Concept: Hyponymy and synonym relations among concepts and instances BIBREF19 .
SAOKE is designed to express all these four types of facts. Table 1 presents an example sentence and the involved facts of these four classes in the SAOKE form. We should mention that the sentences and facts in English are directly translated from the corresponding Chinese sentences and facts, and the facts in English may not be the desired outputs of OIE algorithms for those English sentences due to the differences between Chinese and English languages.
Accurateness
SAOKE adopts the principle of being “literally honest”. That is, as much as possible, it uses the words in the original sentences to express the facts. SAOKE follows the philosophy of OIE systems to express various relations without relying on any predefined schema system. There are, however, exceptional situations which are beyond the expression ability of this format. Extra symbols are introduced to handle these situations, as explained below.
Separated relation phrase: In some languages such as Chinese, relation phrases may be divided into several parts residing in discontinued locations of the sentences. To accurately express these relation phrases, we add placeholders ( $X$ , $Y$ , $Z$ , etc) to build continuous and complete expressions. UTF8gbsn “深受X影响” (“deeply influenced by X” in English) in the example of Table 1 is an instance of relation phrase after such processing.
Abbreviated expression: We explicitly express the information in abbreviated expressions by introducing symbolic predicates. For example, the expression of “Person (birth date - death date)” is transformed into facts: (Person, BIRTH, birth date) (Person, DEATH, death date), and the synonym fact involved in “NBA (National Basketball Association)” is expressed in the form of (NBA, = , National Basketball Association) .
Hidden information: Description of an entity and hyponymy relation between entities are in general expressed implicitly in sentences, and are expressed by symbolic predicates “DESC” and “ISA” respectively, as in Table 1 . Another source of hidden information is the address expression. For example, UTF8gbsn “法国巴黎” (“Paris, France” in English) implies the fact UTF8gbsn (巴黎, LOC, 法国) ((Paris, LOC, France) in English), where the symbol “LOC” means “location”.
Missing information: A sentence may not tell us the exact relation between two entities, or the exact subject/objects of a relation, which are required to be inferred from the context. We use placeholders like “ $X,Y,Z$ ” to denote the missing subjects/objects, and “ $P$ ” to denote the missing predicates.
Atomicity
Atomicity is introduced to eliminate the ambiguity of knowledge expressions. In SAOKE format, each fact is required to be atomic, which means that: (i) it is self-contained for an accurate expression; (ii) it cannot be decomposed into multiple valid facts. We provide examples in Table 2 to help understand these two criteria.
Note that the second criterion implies that any logical connections (including nested expressions) between facts are neglected (e.g., the third case in Table 2 ). This problem of expression relations between facts will be considered in the future version of SAOKE.
Compactness
Natural language may express several facts in a compact form. For example, in a sentence UTF8gbsn “李白爱饮酒作诗” (“Li Bai loved to drink and write poetry” in English ), according to atomicity, two facts should be extracted: UTF8gbsn (李白, 爱, 饮酒)(李白, 爱, 作诗) ( (Li Bai, loved to, drink)(Li Bai, loved to, write poetry) in English ). In this situation, SAOKE adopts a compact expression to merge these two facts into one expression: UTF8gbsn (李白, 爱, [饮酒|作诗]) ( (Li Bai, loved to, [drink| write poetry]) in English ).
The compactness of expressions is introduced to fulfill, but not to violate the rule of “literally honest”. SAOKE does not allow merging facts if facts are not expressed compactly in original sentences. By this means, the differences between the sentences and the corresponding knowledge expressions are reduced, which may help reduce the complexity of learning from data in SAOKE form.
With the above designs, SAOKE is able to express various kinds of facts, with each historically considered by different open information extraction algorithms, for example, verb based relations in SRLIE BIBREF15 and nominal attributes in ReNoun BIBREF16 , BIBREF17 , descriptive phrases for entities in EntityTagger BIBREF18 , and hypernyms in HypeNet BIBREF19 . SAOKE introduces the atomicity to eliminate the ambiguity of knowledge expressions, and achieves better accuracy and compactness with the aid of the symbolic expressions.
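To make the tuple representation concrete, below is a minimal Python sketch; it is not part of the SAOKE release, and the `Fact` alias, the `expand_compact_fact` helper and the bracket-syntax handling are illustrative assumptions. It shows how a compact SAOKE expression can be expanded back into its atomic facts.

```python
from itertools import product
from typing import List, Tuple

Fact = Tuple[str, ...]  # (subject, predicate, object_1, ..., object_N)

def expand_compact_fact(fact: Fact) -> List[Fact]:
    """Expand compact elements like '[drink|write poetry]' into atomic facts."""
    choices = []
    for element in fact:
        if element.startswith("[") and element.endswith("]"):
            choices.append([e.strip() for e in element[1:-1].split("|")])
        else:
            choices.append([element])
    # One atomic fact per combination of the compact alternatives.
    return [tuple(combo) for combo in product(*choices)]

# (Li Bai, loved to, [drink|write poetry]) expands into two atomic facts.
for atomic in expand_compact_fact(("Li Bai", "loved to", "[drink|write poetry]")):
    print(atomic)
```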
SAOKE Data Set
We randomly collect sentences from Baidu Baike (http://baike.baidu.com), and send those sentences to a crowd sourcing company to label the involved facts. The workers are trained with labeling examples and tested with exams. Then the workers with high exam scores are asked to read and understand the facts in the sentences, and express the facts in the SAOKE format. During the procedure, one sentence is only labeled by one worker. Finally, more than forty thousand sentences with about one hundred thousand facts are returned to us. The manual evaluation results on 100 randomly selected sentences show that the fact level precision and recall is 89.5% and 92.2% respectively. Table 3 shows the proportions of four types of facts (described in Section "SAOKE Data Set" ) contained in the data set. Note that the facts with missing predicates represented by “P” are classified into “Unknown”. We publicize the data set at https://ai.baidu.com/broad/subordinate?dataset=saoke.
Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, in which the data set was said BIBREF20 “13 times larger than the previous largest annotated Open IE corpus”. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that the data set only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms, and thus is biased, whereas the SAOKE data set is not biased to an algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.
Logician
Given a sentence $S$ and a set of expected facts (with all the possible types of facts) $\mathbb {F}=\lbrace F_{1},\cdots ,F_{n}\rbrace $ in SAOKE form, we join all the facts in the order that annotators wrote them into a char sequence $F$ as the expected output. We build Logician under the attention-based sequence-to-sequence learning paradigm, to transform $S$ into $F$ , together with the restricted copy mechanism, the coverage mechanism and the gated dependency mechanism.
Attention based Sequence-to-sequence Learning
The attention-based sequence-to-sequence learning BIBREF22 has been successfully applied to the task of generating text and patterns. Given an input sentence $S=[w_{1}^{S},\cdots ,w_{N_{S}}^{S}]$ , the target sequence $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$ and a vocabulary $V$ (including the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" and the OOV (out of vocabulary) tag) with size $N_{v}$ , the words $w_{i}^{S}$ and $w_{j}^{F}$ can be represented as one-hot vectors $v_{i}^{S}$ and $v_{j}^{F}$ with dimension $N_{v}$ , and transformed into $N_{e}$ -dimensional distributed representation vectors $x_{i}=Ev_{i}^{S}$ and $y_{j}=Ev_{j}^{F}$ respectively by an embedding transform $E\in \mathbb {R}^{N_{e}\times N_{v}}$ . Then the sequence $\lbrace x_{i}\rbrace _{i=1}^{N_{S}}$ is transformed into a sequence of $N_{h}$ -dimensional hidden states $H^{S}=\lbrace h_{i}^{S}\rbrace _{i=1}^{N_{S}}$ using a bi-directional GRU (Gated Recurrent Units) network BIBREF23 , and the sequence $\lbrace y_{j}\rbrace _{j=1}^{N_{F}}$ is transformed into a sequence of $N_{h}$ -dimensional hidden states $H^{F}=\lbrace h_{j}^{F}\rbrace _{j=1}^{N_{F}}$ using a GRU network.
For each position $t$ in the target sequence, the decoder learns a dynamic context vector $c_{t}$ to focus attention on specific locations $l$ in the input hidden states $H^{S}$ , then computes the probability of the generated word by $p(w_{t}^{F}|\lbrace w_{1}^{F},\cdots ,w_{t-1}^{F}\rbrace ,c_{t})=g(h_{t-1}^{F},s_{t},c_{t})$ , where $s_{t}$ is the hidden state of the GRU decoder, $g$ is the word selection model (details can be found in BIBREF22 ), and $c_{t}$ is computed as $c_{t}=\sum _{j=1}^{N_{S}}\alpha _{tj}h_{j}^{S},$ where $\alpha _{tj}=\frac{\exp (e_{tj})}{\sum _{k=1}^{N_{S}}\exp (e_{tk})}$ and $e_{tj}=a(s_{t-1},h_{j}^{S})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S})$ is the alignment model that measures the strength of focus on the $j$ -th location. $W_{a}$ , $U_{a}$ , and $v_{a}$ are weight matrices.
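To illustrate the attention computation above, here is a minimal NumPy sketch of one decoding step; the variable names, the random weights and the dimensions are assumptions used only to show the shapes involved, not the actual implementation.

```python
import numpy as np

def attention_step(s_prev, H_S, W_a, U_a, v_a):
    """One step of additive attention: scores e_{tj}, weights alpha_{tj}, context c_t.

    s_prev : (N_h,)       previous decoder state s_{t-1}
    H_S    : (N_S, N_h)   encoder hidden states h_1^S ... h_{N_S}^S
    W_a, U_a : (N_h, N_h) weight matrices, v_a : (N_h,)
    """
    e = np.tanh(s_prev @ W_a.T + H_S @ U_a.T) @ v_a   # e_{tj} = v_a^T tanh(W_a s_{t-1} + U_a h_j^S)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                               # softmax over source positions
    c_t = alpha @ H_S                                  # c_t = sum_j alpha_{tj} h_j^S
    return alpha, c_t

rng = np.random.default_rng(0)
N_S, N_h = 5, 8
alpha, c_t = attention_step(rng.normal(size=N_h),
                            rng.normal(size=(N_S, N_h)),
                            rng.normal(size=(N_h, N_h)),
                            rng.normal(size=(N_h, N_h)),
                            rng.normal(size=N_h))
print(alpha.shape, c_t.shape)  # (5,) (8,)
```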
Restricted Copy Mechanism
The word selection model employed in BIBREF22 selects words from the whole vocabulary $V$ , which evidently violates the “literal honest” requirement of SAOKE. We propose a restricted version of copy mechanism BIBREF24 as the word selection model for Logician:
We collect the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" into a keyword set $K$ , which contains the symbolic predicates “ $ISA$ ”, “ $DESC$ ”, “ $LOC$ ”, “ $BIRTH$ ”, “ $DEATH$ ”, “ $=$ ”, the structural symbols “ $($ ”, “ $)$ ”, “ $\$$ ”, “ $[$ ”, “ $]$ ”, “ $|$ ”, “ $,$ ”, and the placeholders “ $X$ ”, “ $Y$ ”, “ $Z$ ”, “ $P$ ”, where “ $,$ ” is the separator of elements of fact tuples and “ $X$ ”, “ $Y$ ”, “ $Z$ ”, “ $P$ ” are the placeholders. When the decoder is considering generating a word $w_{t}^{F}$ , it can choose $w_{t}^{F}$ from either $S$ or $K$ .
$$p(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})=p_{X}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})+p_{K}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t}),$$ (Eq. 15)
where $p_{X}$ is the probability of copying from $S$ and $p_{K}$ is the probability of selecting from $K$ . Since $S\cap K=\emptyset $ and there are no unknown words in this problem setting, we compute $p_{X}$ and $p_{K}$ in a simpler way than that in BIBREF24 , as follows: $$p_{X}(w_{t}^{F}=w_{j}^{S})=\frac{1}{Z}\exp (\sigma ((h_{j}^{S})^{T}W_{c})s_{t}),\qquad p_{K}(w_{t}^{F}=k_{i})=\frac{1}{Z}\exp (v_{i}^{T}W_{o}s_{t}),$$
where the (generic) $Z$ is the normalization term, $k_{i}$ is one of keywords, $v_{i}$ is the one-hot indicator vector for $k_{i}$ , $W_{o}\in \mathbb {R}^{(|K|\times N_{h})}$ , $W_{c}\in \mathbb {R}^{(N_{h}\times N_{h})}$ , and $\sigma $ is a nonlinear activation function.
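The following NumPy sketch illustrates the restricted copy distribution: copy scores over source words and selection scores over keywords are computed separately and normalized by a shared $Z$. The function and variable names, and the choice of the logistic function for $\sigma$, are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def restricted_copy_distribution(H_S, s_t, W_c, W_o):
    """Joint distribution over copying a source word or emitting a keyword.

    H_S : (N_S, N_h)  encoder states,  W_c : (N_h, N_h)
    s_t : (N_h,)      decoder state,   W_o : (K, N_h) keyword output matrix
    Returns (p_copy over the N_S source words, p_keyword over the K keywords).
    """
    copy_scores = sigmoid(H_S @ W_c) @ s_t        # sigma((h_j^S)^T W_c) s_t, shape (N_S,)
    keyword_scores = W_o @ s_t                    # v_i^T W_o s_t for one-hot v_i, shape (K,)
    scores = np.concatenate([copy_scores, keyword_scores])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                          # shared normalizer Z over S union K
    return probs[: len(copy_scores)], probs[len(copy_scores):]
```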
Coverage Mechanism
In practice, Logician may forget to extract some facts (under-extraction) or extract the same fact many times (over-extraction). We incorporate the coverage mechanism BIBREF25 into Logician to alleviate these problems. Formally, when the decoder considers generating a word $w_{t}^{F}$ , a coverage vector $m_{j}^{t}$ is introduced for each word $w_{j}^{S}$ , and updated as follows: $$m_{j}^{t}=\mu (m_{j}^{t-1},\alpha _{tj},h_{j}^{S},s_{t-1})=(1-z_{j})\circ m_{j}^{t-1}+z_{j}\circ \tilde{m}_{j}^{t},\qquad \tilde{m}_{j}^{t}=\tanh (W_{h}h_{j}^{S}+u_{\alpha }\alpha _{tj}+W_{s}s_{t-1}+U_{m}[r_{j}\circ m_{j}^{t-1}]),$$
where $\circ $ is the element-wise multiplication operator. The update gate $z_{j}$ and the reset gate $r_{j}$ are defined as, respectively, $$z_{j}=\sigma (W_{h}^{z}h_{j}^{S}+u_{\alpha }^{z}\alpha _{tj}+W_{s}^{z}s_{t-1}+U_{m}^{z}m_{j}^{t-1}),\qquad r_{j}=\sigma (W_{h}^{r}h_{j}^{S}+u_{\alpha }^{r}\alpha _{tj}+W_{s}^{r}s_{t-1}+U_{m}^{r}m_{j}^{t-1}),$$
where $\sigma $ is a logistic sigmoid function. The coverage vector $m_{j}^{t}$ contains the information about the historical attention focused on $w_{j}^{S}$ , and is helpful for deciding whether $w_{j}^{S}$ should be extracted or not. The alignment model is updated as follows BIBREF25 : $ e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}), $
where $V_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
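A minimal NumPy sketch of one coverage update and the coverage-aware alignment score follows; the parameter dictionary `P`, its key names and the use of a plain dict are illustrative assumptions rather than the released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coverage_update(m_prev, alpha_tj, h_j, s_prev, P):
    """GRU-style update of the coverage vector m_j^t for one source word.

    m_prev, h_j, s_prev : (N_h,) vectors; alpha_tj : scalar attention weight.
    P maps names to weight matrices (N_h, N_h) or weight vectors (N_h,).
    """
    z = sigmoid(P["Whz"] @ h_j + P["uaz"] * alpha_tj + P["Wsz"] @ s_prev + P["Umz"] @ m_prev)
    r = sigmoid(P["Whr"] @ h_j + P["uar"] * alpha_tj + P["Wsr"] @ s_prev + P["Umr"] @ m_prev)
    m_tilde = np.tanh(P["Wh"] @ h_j + P["ua"] * alpha_tj + P["Ws"] @ s_prev + P["Um"] @ (r * m_prev))
    return (1.0 - z) * m_prev + z * m_tilde

def coverage_alignment_score(s_prev, h_j, m_j, P):
    # e_{tj} = v_a^T tanh(W_a s_{t-1} + U_a h_j^S + V_a m_j^{t-1})
    return P["va"] @ np.tanh(P["Wa"] @ s_prev + P["Ua"] @ h_j + P["Va"] @ m_j)
```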
Gated Dependency Attention
The semantic relationship between candidate words and the previously decoded word is valuable to guide the decoder to select the correct word. We introduce the gated dependency attention mechanism to utilize such guidance.
For a sentence $S$ , we extract the dependency tree using NLP tools such as CoreNLP BIBREF26 for English and LTP BIBREF27 for Chinese, and convert the tree into a graph by adding reversed edges with revised labels (for example, adding an edge $w_{j}^{S}\rightarrow w_{i}^{S}$ for each edge $w_{i}^{S}\rightarrow w_{j}^{S}$ in the dependency tree). Then for each pair of words $(w_{i}^{S},w_{j}^{S})$ , the shortest path with labels $L=[w_{1}^{L},\cdots ,w_{N_{L}}^{L}]$ in the graph is computed and mapped into a sequence of $N_{e}$ -dimensional distributed representation vectors $[l_{1},\cdots ,l_{N_{L}}]$ by the embedding operation. One could employ an RNN network to convert this sequence of vectors into a feature vector, but the RNN operation is time-consuming. We simply concatenate the vectors of short paths ( $N_{L}\le 3$ ) into a $3N_{e}$ -dimensional vector and feed it into a two-layer feed-forward neural network to generate an $N_{h}$ -dimensional feature vector $f_{ij}$ ; for long paths with $N_{L}>3$ , $f_{ij}$ is set to the zero vector. We define the dependency attention vector $d_{tj}=\sum _{k=1}^{N_{S}}\hat{p}_{X}(w_{t-1}^{F}=w_{k}^{S})\,f_{jk}$ , where $\hat{p}_{X}$ is the sharpened probability $p_{X}$ defined in Equation ( 15 ). If $w_{t-1}^{F}$ was copied from the source sentence, $d_{tj}$ represents the semantic relationship between $w_{j}^{S}$ and $w_{t-1}^{F}$ ; if $w_{t-1}^{F}$ was selected from the keyword set $K$ , then $d_{tj}$ is close to zero. To correctly guide the decoder, we need to gate $d_{tj}$ to remember the previous attention vector sometimes (for example, when a keyword is selected), and to forget it sometimes (for example, when a new fact is started). Finally, we define $\tilde{d}_{tj}=\mathrm {GRU}(\tilde{d}_{(t-1)j},d_{tj})$ as the gated dependency attention vector, where $\mathrm {GRU}$ is the GRU gated function, and update the alignment model as follows: $$e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1},\tilde{d}_{tj})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}+D_{a}\tilde{d}_{tj}),$$
where $D_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
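A sketch of the dependency-path feature described above follows, assuming the dependency graph (with reversed edges already added) is stored as a networkx graph with a `label` attribute on each edge; the function name, the label-only path representation and the weight arguments are simplifying assumptions, not the released implementation.

```python
import numpy as np
import networkx as nx

def path_feature(graph, i, j, label_emb, W1, b1, W2, b2, N_e, max_len=3):
    """N_h-dimensional dependency feature for the word pair (w_i^S, w_j^S)."""
    try:
        nodes = nx.shortest_path(graph, i, j)
        labels = [graph.edges[u, v]["label"] for u, v in zip(nodes, nodes[1:])]
    except nx.NetworkXNoPath:
        labels = []
    if not labels or len(labels) > max_len:
        return np.zeros(W2.shape[0])                 # long or missing paths -> zero vector
    vecs = [label_emb[l] for l in labels]            # embed each label, (N_e,) each
    vecs += [np.zeros(N_e)] * (max_len - len(vecs))  # pad the concatenation to 3 * N_e
    x = np.concatenate(vecs)
    h = np.tanh(W1 @ x + b1)                         # two-layer feed-forward network
    return np.tanh(W2 @ h + b2)
```

The gated combination of consecutive dependency attention vectors can then reuse a standard GRU cell, mirroring the update equations above.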
Post processing
For each sequence generated by Logician, we parse it into a set of facts, remove tuples with illegal format or duplicated tuples. The resultant set is taken as the output of the Logician.
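A minimal sketch of this post-processing step is shown below, assuming the generated facts are serialized as parenthesized, comma-separated tuples concatenated into one string; the exact serialization and legality checks of the released system may differ.

```python
import re

def parse_output(sequence: str):
    """Parse a generated SAOKE sequence into facts, dropping malformed or duplicated tuples."""
    facts, seen = [], set()
    for chunk in re.findall(r"\(([^()]*)\)", sequence):
        fact = tuple(e.strip() for e in chunk.split(","))
        if len(fact) < 3 or any(not e for e in fact):
            continue                    # illegal format: too short or empty element
        if fact in seen:
            continue                    # duplicated tuple
        seen.add(fact)
        facts.append(fact)
    return facts

print(parse_output("(Li Bai, loved to, drink)(Li Bai, loved to, drink)(Li Bai, ISA, poet)"))
```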
Experimental Design
We first measure the utility of various components in Logician to select the optimal model, and then compare this model to the state-of-the-art methods in four types of information extraction tasks: verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation. The SAOKE data set is split into training set, validating set and testing set with ratios of 80%, 10%, 10%, respectively. For all algorithms involved in the experiments, the training set can be used to train the model, the validating set can be used to select an optimal model, and the testing set is used to evaluate the performance.
For each instance pair $(S,F)$ in the test set, where $S$ is the input sentence and $F$ is the formatted string of the ground truth facts, we parse $F$ into a set of tuples $\mathbb {F}=\lbrace F_{j}\rbrace _{j=1}^{M}$ . Given an open information extraction algorithm, it reads $S$ and produces a set of tuples $\mathbb {G}=\lbrace G_{i}\rbrace _{i=1}^{N}$ . To evaluate how well $\mathbb {G}$ approximates $\mathbb {F}$ , we need to match each $G_{i}$ to a ground truth fact $F_{j}$ and check whether $G_{i}$ tells the same fact as $F_{j}$ . To conduct the match, we compute the similarity between each predicted fact in $\mathbb {G}$ and each ground truth fact in $\mathbb {F}$ , then find the optimal matching that maximizes the sum of matched similarities by solving a linear assignment problem BIBREF28 . In this procedure, the similarity between two facts is defined as the length-normalized sum of element-wise string similarities: $$\mathrm {Sim}(G_{i},F_{j})=\frac{\sum _{l=1}^{\min (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))}\mathbf {g}(G_{i}(l),F_{j}(l))}{\max (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))},$$
where $G_{i}(l)$ and $F_{j}(l)$ denote the $l$ -th element of tuple $G_{i}$ and $F_{j}$ respectively, $\mathbf {g}(\cdot ,\cdot )$ denotes the gestalt pattern matching BIBREF29 measure for two strings and $\mathbf {n}(\cdot )$ returns the length of the tuple.
Given a matched pair of $G_{i}$ and $F_{j}$ , we propose an automatic approach to judge whether they tell the same fact. They are judged as telling the same fact if one of the following two conditions is satisfied:
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(G_{i}(l),F_{j}(l))\ge 0.85,l=1,\cdots ,\mathbf {n}(G_{i})$ ;
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(\mathcal {S}(G_{i}),\mathcal {S}(F_{j}))\ge 0.85$ ;
where $\mathcal {S}$ is a function formatting a fact into a string by filling the arguments into the placeholders of the predicate.
With the automatic judgment, the precision ( $P$ ), recall ( $R$ ) and $F_{1}$ -score over a test set can be computed. By defining a confidence measure and ordering the facts by their confidences, a precision-recall curve can be drawn to illustrate the overall performance of the algorithm. For Logician, the confidence of a fact is computed as the average of log probabilities over all words in that fact.
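The matching and judgment procedure can be sketched with SciPy's linear-sum-assignment solver and Python's difflib, whose SequenceMatcher.ratio() implements the gestalt (Ratcliff/Obershelp) pattern matching measure; the similarity formula mirrors the reconstruction above, and the helper names and string-joining shortcut for the formatting function $\mathcal {S}$ are illustrative assumptions rather than the released evaluation script.

```python
from difflib import SequenceMatcher

import numpy as np
from scipy.optimize import linear_sum_assignment

def g(a: str, b: str) -> float:
    """Gestalt pattern matching similarity of two strings."""
    return SequenceMatcher(None, a, b).ratio()

def fact_similarity(G_i, F_j):
    common = min(len(G_i), len(F_j))
    return sum(g(G_i[l], F_j[l]) for l in range(common)) / max(len(G_i), len(F_j))

def match_facts(predicted, gold):
    """Optimal one-to-one matching that maximizes the total similarity."""
    sim = np.array([[fact_similarity(G, F) for F in gold] for G in predicted])
    rows, cols = linear_sum_assignment(-sim)          # maximize = minimize the negative
    return [(r, c, sim[r, c]) for r, c in zip(rows, cols)]

def same_fact(G_i, F_j, thresh=0.85):
    """Automatic judgment: element-wise match, or match of the formatted fact strings."""
    if len(G_i) == len(F_j) and all(g(G_i[l], F_j[l]) >= thresh for l in range(len(G_i))):
        return True
    # Joining elements with spaces is a simplification of the formatting function S.
    return len(G_i) == len(F_j) and g(" ".join(G_i), " ".join(F_j)) >= thresh
```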
Beyond the automatic judgment, human evaluation is also employed. Given an algorithm and the corresponding fact confidence measure, we find a threshold that produces approximately 10% recall (measured by automatic judgment) on the validation set of SAOKE data set. A certain number of sentences (200 for verb/preposition based relation extraction task, and 1000 for other three tasks) are randomly chosen from the testing set of SAOKE data set, and the facts extracted from these sentences are filtered with that threshold. Then we invite three volunteers to manually refine the labeled set of facts for each sentence and vote to decide whether each filtered fact is correctly involved in the sentence. The standard precision, recall and $F_{1}$ -score are reported as the human evaluation results.
For each instance pair $(S,F)$ in the training set of SAOKE data set, we split $S$ and $F$ into words using LTP toolset BIBREF27 , and words appearing in more than 2 sentences are added to the vocabulary. By adding the OOV (out of vocabulary) tag, we finally obtain a vocabulary $V$ with size $N_{V}=65,293$ . The dimension of all embedding vectors is set to $N_{e}=200$ , and the dimension of hidden states is set to $N_{h}=256$ . We use a three-layer bi-directional GRU with dimension 128 to encode $\lbrace x_{i}\rbrace _{i=1}^{N_{S}}$ into hidden states $\lbrace h_{i}^{S}\rbrace _{i=1}^{N_{S}}$ , and a two-layer GRU with hidden-dimension 256 to encode the sequence of $\lbrace y_{j}\rbrace _{j=1}^{N_{F}}$ into hidden states $\lbrace h_{j}^{F}\rbrace _{j=1}^{N_{F}}$ . Finally, the Logician network is constructed as stated in Section "Logician" . The Logician is then trained using stochastic gradient descent (SGD) with RMSPROP BIBREF30 strategy for 20 epochs with batch size 10 on the training set of SAOKE data set. The model with the best $F_{1}$ -score by automatic judgment on the validation set is selected as the trained model. When the model is trained, given a sentence, we employ the greedy search procedure to produce the fact sequences.
Evaluating Components' Utilities
In this section, we analyze the effects of the components involved in Logician: restricted copy, coverage, and gated dependency. Since the restricted copy mechanism is an essential requirement for Logician to achieve the goal of being literally honest, we take the Logician with only the copy mechanism (denoted by $Copy$ ) as the baseline, and analyze the effectiveness of the coverage mechanism (denoted by $Copy+Coverage$ ), the gated dependency mechanism (denoted by $Copy+GatedDep$ ) and both (denoted by $All$ ). Furthermore, there is another option of whether or not to involve shallow semantic information such as POS-tags and NER-tags in the model. For models involving such information, the POS-tag and NER-tag of each word in sentence $S$ are annotated using LTP. For each word in $F$ that is not any keyword in $K$ , the POS-tag and NER-tag are copied from the corresponding original word in $S$ . For each keyword in $K$ , a unique POS-tag and a unique NER-tag are assigned to it. Finally, for each word in $S$ or $F$ , the POS-tag and NER-tag are mapped into $N_{e}$ -dimensional distributed representation vectors and are concatenated into $x_{i}$ or $y_{j}$ to join the training.
All models are trained using the same settings described in the above section, and the default output facts (without any confidence filtering) are evaluated by the automatic judgment. The results are reported in Table 4 . From the results, we can see that the model involving all the components and the shallow tag information achieves the best performance. We use that model in the comparisons with existing approaches.
Comparison with Existing Approaches
In the task of extracting verb/preposition based facts, we compare our Logician with the following state-of-the-art Chinese OIE algorithms:
SRLIE: our implementation of SRLIE BIBREF15 for the Chinese language, which first uses LTP tool set to extract the semantic role labels, and converts the results into fact tuples using heuristic rules. The confidence of each fact is computed as the ratio of the number of words in the fact to the number of words in the shortest fragment of source sentence that contains all words in the fact.
ZORE : the Chinese Open Relation Extraction system BIBREF31 , which builds a set of patterns by bootstrapping based on dependency parsing results, and uses the patterns to extract relations. We used the program provided by the author of ZORE system BIBREF31 to generate the extraction results in XML format, and developed an algorithm to transform the facts into n-ary tuples, where auxiliary information extracted by ZORE is removed. The confidence measure for ZORE is the same as that for SRLIE.
SRL $_{\text{SAOKE}}$ : our implementation of the state-of-the-art SRL algorithm proposed in BIBREF32 with modifications to fit OIE tasks. $\text{SRL}_{\text{SAOKE}}$ extracts facts in two steps: (i) Predicate head word detection: detects the head word for the predicate of each possible fact, where the head word of a predicate is the last word in the predicate depending on words outside the predicate in the dependency tree. (ii) Element phrase detection: for each detected head word, detects the subject phrase, predicate phrase and object phrases by tagging the sentence with an extended BIOE tagging scheme, which tags the word neighboring the separation point of a phrase with “M” to cope with separated phrases. We modify the code provided by the author of BIBREF32 to implement the above strategy, and then train a model with the same parameter settings as in BIBREF32 on the training set of the SAOKE data set. The confidence measure for $\text{SRL}_{\text{SAOKE}}$ is computed as the average of log probabilities over all tags of words in facts. Note that $\text{SRL}_{\text{SAOKE}}$ can extract both verb/preposition based relations and nominal attributes, but in this section, we only evaluate the results of the former type of facts.
The precision-recall curves of Logician and above three comparison algorithms are shown in Figure 1 , and the human evaluation results are shown in the first section of Table 5 .
The state-of-the-art nominal attribute extraction method is ReNoun BIBREF16 , BIBREF17 . However, it relies on a pre-constructed English attribute schema system BIBREF33 which is not available for Chinese, so it is not an available baseline for Chinese. Since $\text{SRL}_{\text{SAOKE}}$ can extract nominal attributes, we compare Logician with $\text{SRL}_{\text{SAOKE}}$ on this task. The precision-recall curves of Logician and $\text{SRL}_{\text{SAOKE}}$ on the nominal attribute extraction task are shown in Figure 1 , and the human evaluation results are shown in the second section of Table 5 .
Descriptive phrase extraction has been considered in BIBREF18 , in which domain names are required to develop patterns to extract candidates for descriptive phrases, so this method is not applicable to open domain tasks. We develop a baseline algorithm (called Semantic Dependency Description Extractor, SDDE) to extract descriptive phrase. It extracts semantic dependency relation between words using LTP toolset, and for each noun $w_n$ which is the parent of some semantic “Desc” relations, identifies a noun phrase $N$ with $w_n$ as its heading word, assembles a descriptive phrase $D$ containing all words with “Desc” relation to $w_n$ , and finally outputs the fact “( $N$ , $DESC$ , $D$ )”. The confidence of fact in SDDE is computed as the ratio of the number of adverbs and adjectives in $D$ to the number of words in $D$ . The precision-recall curves of Logician and SDDE on the descriptive phrase extraction task are shown in Figure 1 , and the human evaluation results are shown in the third section of Table 5 .
HypeNet BIBREF19 is the state-of-the-art algorithm recommended for hyponymy extraction BIBREF34 , which judges whether hyponymy relation exists between two given words. To make it capable of judging hyponymy relation between two phrases, we replace the word embedding vector component in HypeNet by an LSTM network. Two modified HypeNet models are built using different training data sets: (i) $\text{HypeNet}_{\text{Phrase}}$ : using the pairs of phrases with ISA relation in the training set of SAOKE data set (9,407 pairs after the compact expression expansion); (ii) $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ : besides the training set for $\text{HypeNet}_{\text{Phrase}}$ , adding two Chinese hyponymy data sets (1.4 million pair of words in total in hyponymy relation): Tongyici Cilin (Extended) (CilinE for short) BIBREF27 and cleaned Wikipedia Category data BIBREF35 . In both cases, the sentences from both Chinese Wikipedia pages and training set of SAOKE data set are taken as the background corpus for the HypeNet algorithm. In the testing phase, the trained models are used to predict whether the hyponymy relation exists for each pair of noun phrases/words in sentences of the testing set of SAOKE data set. The confidence of a judgment is the predicted probability of the existence of hyponymy relation. The precision-recall curves of Logician, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ are shown in Figure 1 , and the human evaluation results in the fourth section of Table 5 .
Results Analysis
The experimental results reveal that Logician outperforms the comparison methods by a large margin in the first three tasks. For the hyponymy detection task, Logician clearly outperforms $\text{HypeNet}_{\text{Phrase}}$ using the same training data, and produces results comparable to $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ with much less training data. Table 6 exhibits several example sentences and the facts extracted by these algorithms.
The poor performance of the pattern-based methods is plausibly due to the noise in the SAOKE data set. The sentences in the SAOKE data set are randomly selected from a web encyclopedia, with a free and casual writing style, and are thus noisier than the training data of the NLP toolset used by these methods. In this situation, the NLP toolset may produce poor results, and so do the pattern-based methods.
Models learned from the SAOKE data set achieve much better performance. Nevertheless, $\text{SRL}_{\text{SAOKE}}$ extracts each fact without knowing whether a candidate word has been used in other facts, which results in the misleading overlap of the word UTF8gbsn“学” (“Learn” in English) between two facts in the first case of Table 6 . Similarly, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ focus on the semantic vectors of pairs of phrases and their dependency paths in the background corpus. They extract each fact independently from other facts and hence do not know whether there have been any other relations extracted about these two phrases. In other words, for those comparison methods, an important source of information is neglected and a global optimization over all facts involved in sentences is absent.
On the contrary, Logician performs global optimization over the facts involved in each sentence by the sequence-to-sequence learning paradigm with the help of the coverage mechanism, in which facts compete with each other for the attention of words, but also cooperate to share words. Valuable information is shared between these multiple tasks, which makes Logician consistently superior to the other algorithms in these tasks.
Furthermore, $\text{SRL}_{\text{SAOKE}}$ and $\text{HypeNet}$ methods suffer from the OOV problem, such as unfamiliar words/phrases like the person name and school name in the last case of Table 6 . In this situation they may fail to produce a reasonable result. Logician is able to cope with unfamiliar words/phrases by exploiting the context information using deep RNN network with the help of copy mechanism.
Extraction Error Analysis of Logician
We do a preliminary analysis of the results produced by the Logician model. The most notable problem is that it is unable to recall some facts for long or complex sentences. The last case in Table 6 exhibits such a situation, where the fact UTF8gbsn(蔡竞,ISA,经济学博士)((Cai Jing, ISA, Ph. D. in economics) in English) is not recalled. This phenomenon indicates that the coverage mechanism may lose effectiveness in this situation. The second class of error is incomplete extraction, as exhibited in the third case in Table 6 . Due to the incomplete extraction, the leftover parts may interfere with the generation of other facts and result in nonsense outputs, which is the third class of error. We believe it is helpful to introduce extra rewards into the learning procedure of Logician to overcome these problems. For example, the reward could be the amount of information remaining after the fact extraction, or the completeness of the extracted facts. Developing such rewards and reinforcement learning algorithms that use them to refine Logician belongs to our future work.
Knowledge Expressions
Tuple is the most common knowledge expression format for OIE systems to express n-ary relation between subject and objects. Beyond such information, ClausIE BIBREF36 extracts extra information in the tuples: a complement, and one or more adverbials, and OLLIE BIBREF6 extracts additional context information. SAOKE is able to express n-ary relations, and can be easily extended to support the knowledge extracted by ClausIE, but needs to be redesigned to support context information, which belongs to our future work.
However, there is a fundamental difference between SAOKE and tuples in traditional OIE systems. In traditional OIE systems, knowledge expression is generally not directly related to the extraction algorithm. It is a tool to reorganize the extracted knowledge into a form for further easy reading/storing/computing. However, SAOKE is proposed to act as the direct learning target of the end-to-end Logician model. In such end-to-end framework, knowledge representation is the core of the system, which decides what information would be extracted and how complex the learning algorithm would be. To our knowledge, SAOKE is the first attempt to design a knowledge expression friendly to the end-to-end learning algorithm for OIE tasks. Efforts are still needed to make SAOKE more powerful in order to express more complex knowledge such as events.
Relation Extraction
Relation extraction is the task to identify semantic connections between entities. Major existing relation extraction algorithms can be classified into two classes: closed-domain and open-domain. Closed-domain algorithms are learnt to identify a fixed and finite set of relations, using supervised methods BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 or weakly supervised methods BIBREF1 , BIBREF41 , while the open-domain algorithms, represented by aforementioned OIE systems, discover open-domain relations without predefined schema. Beyond these two classes, methods like universal schema BIBREF42 are able to learn from both data with fixed and finite set of relations, such as relations in Freebase, and data with open-domain surface relations produced by heuristic patterns or OIE systems.
Logician can be used as an OIE system to extract open-domain relation between entities, and act as sub-systems for knowledge base construction/completion with the help of schema mapping BIBREF43 . Compared with existing OIE systems, which are pattern-based or self-supervised by labeling samples using patterns BIBREF13 , to our knowledge Logician is the first model trained in a supervised end-to-end approach for OIE task, which has exhibited powerful ability in our experiments. There are some neural based end-to-end systems BIBREF39 , BIBREF40 , BIBREF41 proposed for relation extraction, but they all aim to solve the close-domain problem.
However, Logician is not limited to relation extraction task. First, Logician extracts more information beyond relations. Second, Logician focuses on examining how natural languages express facts BIBREF5 , and producing helpful intermediate structures for high level tasks.
Language to Logic
Efforts had been made to map natural language sentences into logical form. Some approaches such as BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 learn the mapping under the supervision of manually labeled logical forms, while others BIBREF48 , BIBREF49 are indirectly supervised by distant information, system rewards, etc. However, all previous works rely on a pre-defined, domain specific logical system, which limits their ability to learn facts out of the pre-defined logical system.
Logician can be viewed as a system that maps language to natural logic, in which the majority of information is expressed by natural phrase. Other than systems mentioned above which aim at execution using the logical form, Logician focuses on understanding how the fact and logic are expressed by natural language. Further mapping to domain-specific logical system or even executor can be built on the basis of Logician's output, and we believe that, with the help of Logician, the work would be easier and the overall performance of the system may be improved.
Facts to Language
The problem of generating sentences from a set of facts has attracted a lot of attentions BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 . These models focus on facts with a predefined schema from a specific problem domain, such as people biographies and basketball game records, but could not work on open domain. The SAOKE data set provides an opportunity to extend the ability of these models into open domain.
Duality between Knowledge and Language
As mentioned in above sections, the SAOKE data set provides examples of dual mapping between facts and sentences. Duality has been verified to be useful to promote the performance of agents in many NLP tasks, such as back-and-forth translation BIBREF54 , and question-answering BIBREF55 . It is a promising approach to use the duality between knowledge and language to improve the performance of Logician.
Conclusion
In this paper, we consider the open information extraction (OIE) problem for a variety of types of facts in a unified view. Our solution consists of three components: SAOKE format, SAOKE data set, and Logician. SAOKE form is designed to express different types of facts in a unified manner. We publicly release the largest manually labeled data set for OIE tasks in SAOKE form. Using the labeled SAOKE data set, we train an end-to-end neural sequence-to-sequence model, called Logician, to transform sentences in natural language into facts. The experiments reveal the superiority of Logician in various open-domain information extraction tasks to the state-of-the-art algorithms.
Regarding future work, there are at least three promising directions. Firstly, one can investigate knowledge expression methods to extend SAOKE to express more complex knowledge, for tasks such as event extraction. Secondly, one can develop novel learning strategies to improve the performance of Logician and adapt the algorithm to the extended future version of SAOKE. Thirdly, one can extend SAOKE format and Logician algorithm in other languages. | verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation. |
c8031c1629d270dedc3b0c16dcb7410524ff1bab | c8031c1629d270dedc3b0c16dcb7410524ff1bab_0 | Q: How is Logician different from traditional seq2seq models?
Text: Introduction
Semantic applications typically work on the basis of intermediate structures derived from sentences. Traditional word-level intermediate structures, such as POS-tags, dependency trees and semantic role labels, have been widely applied. Recently, entity and relation level intermediate structures attract increasingly more attentions.
In general, knowledge based applications require entity and relation level information. For instance, in BIBREF0 , the lexicalized dependency path between two entity mentions was taken as the surface pattern facts. In distant supervision BIBREF1 , the word sequence and dependency path between two entity mentions were taken as evidence of certain relation. In Probase BIBREF2 , candidates of taxonomies were extracted by Hearst patterns BIBREF3 . The surface patterns of relations extracted by Open Information Extraction (OIE) systems BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 worked as the source of question answering systems BIBREF9 , BIBREF10 . In addition, entity and relation level intermediate structures have been proven effective in many other tasks such as text summarization BIBREF11 , BIBREF12 , BIBREF13 , text comprehension, word similarity, word analogy BIBREF14 , and more.
The task of entity/relation level mediate structure extraction studies how facts about entities and relations are expressed by natural language in sentences, and then expresses these facts in an intermediate (and convenient) format. Although entity/relation level intermediate structures have been utilized in many applications, the study of learning these structures is still in an early stage.
Firstly, the problem of extracting different types of entity/relation level intermediate structures has not been considered in a unified fashion. Applications generally need to construct their own handcrafted heuristics to extract required entity/relation level intermediate structures, rather than consulting a commonly available NLP component, as they do for word level intermediate structures. Open IE-v4 system (http://knowitall.github.io/openie/) attempted to build such components by developing two sub-systems, with each extracting one type of intermediate structures, i.e., SRLIE BIBREF15 for verb based relations, and ReNoun BIBREF16 , BIBREF17 for nominal attributes. However, important information about descriptive tags for entities and concept-instance relations between entities were not considered.
Secondly, existing solutions to the task either used pattern matching technique BIBREF2 , BIBREF4 , BIBREF6 , BIBREF7 , or were trained in a self-supervised manner on the data set automatically generated by heuristic patterns or info-box matching BIBREF7 , BIBREF4 , BIBREF8 . It is well-understood that pattern matching typically does not generalize well and the automatically generated samples may contain lots of noises.
This paper aims at tackling some of the well-known challenging problems in OIE systems, in a supervised end-to-end deep learning paradigm. Our contribution can be summarized as three major components: SAOKE format, SAOKE data set, and Logician.
Symbol Aided Open Knowledge Expression (SAOKE) is a knowledge expression form with several desirable properties: (i) SAOKE is literally honest and open-domain. Following the philosophy of OIE systems, SAOKE uses words in the original sentence to express knowledge. (ii) SAOKE provides a unified view over four common types of knowledge: relation, attribute, description and concept. (iii) SAOKE is an accurate expression. With the aid of symbolic system, SAOKE is able to accurately express facts with separated relation phrases, missing information, hidden information, etc.
SAOKE Data Set is a human annotated data set containing 48,248 Chinese sentences and corresponding facts in the SAOKE form. We publish the data set for research purpose. To the best of our knowledge, this is the largest publicly available human annotated data set for open-domain information extraction tasks.
Logician is a supervised end-to-end neural learning algorithm which transforms natural language sentences into facts in the SAOKE form. Logician is trained under the attention-based sequence-to-sequence paradigm, with three mechanisms: restricted copy mechanism to ensure literally honestness, coverage mechanism to alleviate the under extraction and over extraction problem, and gated dependency attention mechanism to incorporate dependency information. Experimental results on four types of open information extraction tasks reveal the superiority of the Logician algorithm.
Our work will demonstrate that SAOKE format is suitable for expressing various types of knowledge and is friendly to end-to-end learning algorithms. Particularly, we will focus on showing that the supervised end-to-end learning is promising for OIE tasks, to extract entity and relation level intermediate structures.
The rest of this paper is organized as follows. Section "SAOKE Format: Symbol Aided Open Knowledge Expression" presents the details of SAOKE. Section "SAOKE Data Set" describes the human labeled SAOKE data set. Section "Logician" describes the Logician algorithm and Section "Empirical Evaluation" evaluates the Logician algorithm and compares its performance with the state-of-the-art algorithms on four OIE tasks. Section "Related Works" discusses the related work and Section "Conclusion" concludes the paper.
SAOKE Format: Symbol Aided Open Knowledge Expression
When reading a sentence in natural language, humans are able to recognize the facts involved in the sentence and accurately express them. In this paper, Symbolic Aided Open Knowledge Expression (SAOKE) is proposed as the form for honestly recording these facts. SAOKE expresses the primary information of sentences in n-ary tuples $(subject,predicate,object_{1},\cdots ,object_{N})$ , and (in this paper) neglects some auxiliary information. In the design of SAOKE, we take four requirements into consideration: completeness, accurateness, atomicity and compactness.
Completeness
After having analyzed a large number of sentences, we observe that the majority of facts can be classified into the following classes:
Relation: Verb/preposition based n-ary relations between entity mentions BIBREF15 , BIBREF6 ;
Attribute:Nominal attributes for entity mentions BIBREF16 , BIBREF17 ;
Description: Descriptive phrases of entity mentions BIBREF18 ;
Concept: Hyponymy and synonym relations among concepts and instances BIBREF19 .
SAOKE is designed to express all these four types of facts. Table 1 presents an example sentence and the involved facts of these four classes in the SAOKE form. We should mention that the sentences and facts in English are directly translated from the corresponding Chinese sentences and facts, and the facts in English may not be the desired outputs of OIE algorithms for those English sentences due to the differences between Chinese and English languages.
Accurateness
SAOKE adopts the ideology of “literally honest”. That is, as much as possible, it uses the words in the original sentences to express the facts. SAOKE follows the philosophy of OIE systems to express various relations without relying on any predefined schema system. There are, however, exceptional situations which are beyond the expression ability of this format. Extra symbols will be introduced to handle these situations, which are explained as follows.
Separated relation phrase: In some languages such as Chinese, relation phrases may be divided into several parts residing in discontinued locations of the sentences. To accurately express these relation phrases, we add placeholders ( $X$ , $Y$ , $Z$ , etc) to build continuous and complete expressions. UTF8gbsn “深受X影响” (“deeply influenced by X” in English) in the example of Table 1 is an instance of relation phrase after such processing.
Abbreviated expression: We explicitly express the information in abbreviated expressions by introducing symbolic predicates. For example, the expression of “Person (birth date - death date)” is transformed into facts: (Person, BIRTH, birth date) (Person, DEATH, death date), and the synonym fact involved in “NBA (National Basketball Association)” is expressed in the form of (NBA, = , National Basketball Association) .
Hidden information: Description of an entity and hyponymy relation between entities are in general expressed implicitly in sentences, and are expressed by symbolic predicates “DESC” and “ISA” respectively, as in Table 1 . Another source of hidden information is the address expression. For example, UTF8gbsn “法国巴黎” (“Paris, France” in English) implies the fact UTF8gbsn (巴黎, LOC, 法国) ((Paris, LOC, France) in English), where the symbol “LOC” means “location”.
Missing information: A sentence may not tell us the exact relation between two entities, or the exact subject/objects of a relation, which are required to be inferred from the context. We use placeholders like “ $X,Y,Z$ ” to denote the missing subjects/objects, and “ $P$ ” to denote the missing predicates.
Atomicity
Atomicity is introduced to eliminate the ambiguity of knowledge expressions. In SAOKE format, each fact is required to be atomic, which means that: (i) it is self-contained for an accurate expression; (ii) it cannot be decomposed into multiple valid facts. We provide examples in Table 2 to help understand these two criteria.
Note that the second criterion implies that any logical connections (including nested expressions) between facts are neglected (e.g., the third case in Table 2 ). This problem of expression relations between facts will be considered in the future version of SAOKE.
Compactness
Natural language may express several facts in a compact form. For example, in a sentence UTF8gbsn “李白爱饮酒作诗” (“Li Bai loved to drink and write poetry” in English ), according to atomicity, two facts should be extracted: UTF8gbsn (李白, 爱, 饮酒)(李白, 爱, 作诗) ( (Li Bai, loved to, drink)(Li Bai, loved to, write poetry) in English ). In this situation, SAOKE adopts a compact expression to merge these two facts into one expression: UTF8gbsn (李白, 爱, [饮酒|作诗]) ( (Li Bai, loved to, [drink| write poetry]) in English ).
The compactness of expressions is introduced to fulfill, but not to violate the rule of “literally honest”. SAOKE does not allow merging facts if facts are not expressed compactly in original sentences. By this means, the differences between the sentences and the corresponding knowledge expressions are reduced, which may help reduce the complexity of learning from data in SAOKE form.
With the above designs, SAOKE is able to express various kinds of facts, with each historically considered by different open information extraction algorithms, for example, verb based relations in SRLIE BIBREF15 and nominal attributes in ReNoun BIBREF16 , BIBREF17 , descriptive phrases for entities in EntityTagger BIBREF18 , and hypernyms in HypeNet BIBREF19 . SAOKE introduces the atomicity to eliminate the ambiguity of knowledge expressions, and achieves better accuracy and compactness with the aid of the symbolic expressions.
SAOKE Data Set
We randomly collect sentences from Baidu Baike (http://baike.baidu.com), and send those sentences to a crowd sourcing company to label the involved facts. The workers are trained with labeling examples and tested with exams. Then the workers with high exam scores are asked to read and understand the facts in the sentences, and express the facts in the SAOKE format. During the procedure, one sentence is only labeled by one worker. Finally, more than forty thousand sentences with about one hundred thousand facts are returned to us. The manual evaluation results on 100 randomly selected sentences show that the fact level precision and recall is 89.5% and 92.2% respectively. Table 3 shows the proportions of four types of facts (described in Section "SAOKE Data Set" ) contained in the data set. Note that the facts with missing predicates represented by “P” are classified into “Unknown”. We publicize the data set at https://ai.baidu.com/broad/subordinate?dataset=saoke.
Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, in which the data set was said BIBREF20 “13 times larger than the previous largest annotated Open IE corpus”. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that the data set only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms, and thus is biased, whereas the SAOKE data set is not biased to an algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.
Logician
Given a sentence $S$ and a set of expected facts (with all the possible types of facts) $\mathbb {F}=\lbrace F_{1},\cdots ,F_{n}\rbrace $ in SAOKE form, we join all the facts in the order that annotators wrote them into a char sequence $F$ as the expected output. We build Logician under the attention-based sequence-to-sequence learning paradigm, to transform $S$ into $F$ , together with the restricted copy mechanism, the coverage mechanism and the gated dependency mechanism.
Attention based Sequence-to-sequence Learning
The attention-based sequence-to-sequence learning BIBREF22 have been successfully applied to the task of generating text and patterns. Given an input sentence $S=[w_{1}^{S},\cdots ,w_{N_{S}}^{S}]$ , the target sequence $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$ and a vocabulary $V$ (including the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" and the OOV (out of vocabulary) tag ) with size $N_{v}$ , the words $w_{i}^{S}$ and $w_{j}^{F}$ can be represented as one-hot vectors $v_{i}^{S}$ and $v_{j}^{F}$ with dimension $N_{v}$ , and transformed into $N_{e}$ -dimensional distributed representation vectors by an embedding transform $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$0 and $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$1 respectively, where $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$2 . Then the sequence of $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$3 is transformed into a sequence of $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$4 -dimensional hidden states $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$5 using bi-directional GRU (Gated Recurrent Units) network BIBREF23 , and the sequence of $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$6 is transformed into a sequence of $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$7 -dimensional hidden states $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$8 using GRU network.
For each position $t$ in the target sequence, the decoder learns a dynamic context vector $c_{t}$ to focus attention on specific location $l$ in the input hidden states $H^{S}$ , then computes the probability of generated words by $p(w_{t}^{F}|\lbrace w_{1}^{F},\cdots ,w_{t-1}^{F}\rbrace ,c_{t})=g(h_{t-1}^{F},s_{t},c_{t})$ , where $s_{t}$ is the hidden state of the GRU decoder, $g$ is the word selection model (details could be found in BIBREF22 ), and $c_{t}$ is computed as $c_{t}=\sum _{j=1}^{N_{S}}\alpha _{tj}h_{j},$ where $\alpha _{tj}=\frac{\exp (e_{tj})}{\sum _{k=1}^{N_{S}}\exp (e_{tk})}$ and $c_{t}$0 is the alignment model to measure the strength of focus on the $c_{t}$1 -th location. $c_{t}$2 , $c_{t}$3 , and $c_{t}$4 are weight matrices.
Restricted Copy Mechanism
The word selection model employed in BIBREF22 selects words from the whole vocabulary $V$ , which evidently violates the “literal honest” requirement of SAOKE. We propose a restricted version of copy mechanism BIBREF24 as the word selection model for Logician:
We collect the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" into a keyword set $K=\lbrace $ “ $ISA$ ”, “ $DESC$ ”, “ $LOC$ ”, “ $BIRTH$ ”, “ $DEATH$ ”, “ $=$ ”, “ $($ ”, “)”, “ $\$$ ”,“ $[$ ”, “ $ISA$0 ”, “ $ISA$1 ”, “ $ISA$2 ”, “ $ISA$3 ”, “ $ISA$4 ”, “ $ISA$5 ” $ISA$6 where “ $ISA$7 ” is the separator of elements of fact tuples. “ $ISA$8 ”, “ $ISA$9 ”, “ $DESC$0 ”, “ $DESC$1 ” are placeholders . When the decoder is considering generating a word $DESC$2 , it can choose $DESC$3 from either $DESC$4 or $DESC$5 .
$$p(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})=p_{X}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})+p_{K}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t}),$$ (Eq. 15)
where $p_{X}$ is the probability of copying from $S$ and $p_{K}$ is the probability of selecting from $K$ . Since $S\cap K=\phi $ and there are no unknown words in this problem setting, we compute $p_{X}$ and $p_{K}$ in a simpler way than that in BIBREF24 , as follows: $ p_{X}(w_{t}^{F}=w_{j}^{S}) & = & \frac{1}{Z}\exp (\sigma ((h_{j}^{S})^{T}W_{c})s_{t}),\\ p_{K}(w_{t}^{F}=k_{i}) & = & \frac{1}{Z}\exp (v_{i}^{T}W_{o}s_{t}), $
where the (generic) $Z$ is the normalization term, $k_{i}$ is one of keywords, $v_{i}$ is the one-hot indicator vector for $k_{i}$ , $W_{o}\in \mathbb {R}^{(|K|\times N_{h})}$ , $W_{c}\in \mathbb {R}^{(N_{h}\times N_{h})}$ , and $\sigma $ is a nonlinear activation function.
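The sketch below illustrates the shared-normalization idea behind Equation (15): copy scores over source positions and selection scores over keywords are exponentiated and normalized together into one distribution. All tensors are random stand-ins, and using tanh for $\sigma$ is an illustrative choice, since the text only requires a nonlinear activation.

```python
import numpy as np

# Restricted copy sketch: one distribution over (source words ∪ keywords), normalized jointly.
rng = np.random.default_rng(1)
N_S, N_h, K = 5, 8, 16               # source length, hidden size, number of keywords
H_S = rng.normal(size=(N_S, N_h))    # encoder states h_j^S
s_t = rng.normal(size=N_h)           # decoder state s_t
W_c = rng.normal(size=(N_h, N_h))
W_o = rng.normal(size=(K, N_h))

copy_scores = np.tanh(H_S @ W_c) @ s_t        # sigma((h_j^S)^T W_c) s_t, one score per source word
keyword_scores = W_o @ s_t                    # v_i^T W_o s_t, one score per keyword
scores = np.concatenate([copy_scores, keyword_scores])
probs = np.exp(scores - scores.max()); probs /= probs.sum()   # shared normalizer Z
p_X, p_K = probs[:N_S], probs[N_S:]
print(p_X.sum() + p_K.sum())                  # == 1.0
```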
Coverage Mechanism
In practice, Logician may forget to extract some facts (under-extraction) or extract the same fact many times (over-extraction). We incorporate the coverage mechanism BIBREF25 into Logician to alleviate these problems. Formally, when the decoder considers generating a word $w_{t}^{F}$ , a coverage vector $m_{j}^{t}$ is introduced for each word $w_{j}^{S}$ , and updated as follows: $ m_{j}^{t} & = & \mu (m_{j}^{t-1},\alpha _{tj},h_{j}^{S},s_{t-1})=(1-z_{j})\circ m_{j}^{t-1}+z_{j}\circ \tilde{m}_{j}^{t},\\ \tilde{m}_{j}^{t} & = & \tanh (W_{h}h_{j}^{S}+u_{\alpha }\alpha _{tj}+W_{s}s_{t-1}+U_{m}[r_{j}\circ m_{j}^{t-1}]), $
where $\circ $ is the element-wise multiplication operator. The update gate $z_{j}$ and the reset gate $r_{j}$ are defined as, respectively, $ z_{j} & = & \sigma (W_{h}^{z}h_{j}^{S}+u_{\alpha }^{z}\alpha _{tj}+W_{s}^{z}s_{t-1}+U_{m}^{z}m_{j}^{t-1}),\\ r_{j} & = & \sigma (W_{h}^{r}h_{j}^{S}+u_{\alpha }^{r}\alpha _{tj}+W_{s}^{r}s_{t-1}+U_{m}^{r}m_{j}^{t-1}), $
where $\sigma $ is a logistic sigmoid function. The coverage vector $m_{j}^{t}$ contains the information about the historical attention focused on $w_{j}^{S}$ , and is helpful for deciding whether $w_{j}^{S}$ should be extracted or not. The alignment model is updated as follows BIBREF25 : $ e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}), $
where $V_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
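A NumPy sketch of one GRU-style coverage update for a single source position follows; all weights are random placeholders, and the coverage dimension is assumed equal to $N_{h}$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One GRU-style coverage update m_j^{t-1} -> m_j^t for a single source word j.
rng = np.random.default_rng(2)
N_h = 8
h_j = rng.normal(size=N_h)            # encoder state h_j^S
s_prev = rng.normal(size=N_h)         # decoder state s_{t-1}
alpha_tj = 0.3                        # attention weight on position j at step t
m_prev = np.zeros(N_h)                # coverage vector m_j^{t-1}
W_hz, W_sz, U_mz = (rng.normal(size=(N_h, N_h)) for _ in range(3))
W_hr, W_sr, U_mr = (rng.normal(size=(N_h, N_h)) for _ in range(3))
W_h, W_s, U_m = (rng.normal(size=(N_h, N_h)) for _ in range(3))
u_az, u_ar, u_a = (rng.normal(size=N_h) for _ in range(3))

z = sigmoid(W_hz @ h_j + u_az * alpha_tj + W_sz @ s_prev + U_mz @ m_prev)   # update gate z_j
r = sigmoid(W_hr @ h_j + u_ar * alpha_tj + W_sr @ s_prev + U_mr @ m_prev)   # reset gate r_j
m_tilde = np.tanh(W_h @ h_j + u_a * alpha_tj + W_s @ s_prev + U_m @ (r * m_prev))
m_new = (1 - z) * m_prev + z * m_tilde                                      # m_j^t
print(m_new.shape)
```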
Gated Dependency Attention
The semantic relationship between candidate words and the previously decoded word is valuable to guide the decoder to select the correct word. We introduce the gated dependency attention mechanism to utilize such guidance.
For a sentence $S$ , we extract the dependency tree using NLP tools such as CoreNLP BIBREF26 for English and LTP BIBREF27 for Chinese, and convert the tree into a graph by adding reversed edges with revised labels (for example, adding an edge $w_{j}^{S}\rightarrow w_{i}^{S}$ with a reversed label for each edge $w_{i}^{S}\rightarrow w_{j}^{S}$ in the dependency tree). Then for each pair of words $(w_{i}^{S},w_{j}^{S})$ , the shortest path with labels $L=[w_{1}^{L},\cdots ,w_{N_{L}}^{L}]$ in the graph is computed and mapped into a sequence of $N_{e}$ -dimensional distributed representation vectors $[l_{1},\cdots ,l_{N_{L}}]$ by the embedding operation. One could employ an RNN network to convert this sequence of vectors into a feature vector, but the RNN operation is time-consuming. We simply concatenate the vectors of short paths ( $N_{L}\le $ 3) into a $3N_{e}$ -dimensional vector and feed the vector into a two-layer feed forward neural network to generate an $N_{h}$ -dimensional feature vector $D_{ij}$ . For long paths with $N_{L}>3$ , $D_{ij}$ is set to a zero vector. We define the dependency attention vector $d_{tj}=\sum _{i=1}^{N_{S}}\hat{p}_{t-1,i}D_{ij}$ , where $\hat{p}_{t-1,i}$ is the sharpened probability $p(w_{t-1}^{F}=w_{i}^{S})$ defined in Equation ( 15 ). If $w_{t-1}^{F}$ is copied from the source sentence, $d_{tj}$ represents the semantic relationship between $w_{j}^{S}$ and $w_{t-1}^{F}$ . If $w_{t-1}^{F}$ is selected from $K$ , then $d_{tj}$ is close to zero. To correctly guide the decoder, we need to gate $d_{tj}$ to remember the previous attention vector sometimes (for example, when the separator of tuple elements is selected), and to forget it sometimes (for example, when a new fact is started). Finally, we define $g_{tj}=\mathrm {GRU}(g_{t-1,j},d_{tj})$ as the gated dependency attention vector, where $\mathrm {GRU}(\cdot ,\cdot )$ is the GRU gated function, and update the alignment model as follows: $ e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1},g_{tj})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}+D_{a}g_{tj}), $
where $D_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
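The path-feature step can be sketched as below: build a graph with reversed edges from the dependency tree, take the shortest labeled path between two words, and keep only short paths. The toy edge list and the "-rev" label suffix are made-up conventions for illustration; the embedding and feed-forward parts are omitted.

```python
from collections import deque

# Sketch of extracting the labeled shortest path between two words in a dependency graph.
edges = [("loved", "Li_Bai", "nsubj"), ("loved", "drink", "xcomp"), ("drink", "wine", "obj")]
graph = {}
for head, dep, label in edges:
    graph.setdefault(head, []).append((dep, label))
    graph.setdefault(dep, []).append((head, label + "-rev"))   # add reversed edge

def shortest_label_path(src, dst, max_len=3):
    """BFS for the label sequence on the shortest path; None if longer than max_len."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        node, labels = queue.popleft()
        if node == dst:
            return labels if len(labels) <= max_len else None
        for nxt, label in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, labels + [label]))
    return None

print(shortest_label_path("Li_Bai", "wine"))   # ['nsubj-rev', 'xcomp', 'obj']
```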
Post processing
For each sequence generated by Logician, we parse it into a set of facts, remove tuples with illegal format or duplicated tuples. The resultant set is taken as the output of the Logician.
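A small sketch of this post-processing step follows: split the generated string into candidate tuples, drop malformed ones, and de-duplicate while preserving order. The "(" / ")" fact delimiters and the "$" element separator are the same illustrative assumptions used earlier, not a fixed specification.

```python
# Post-processing sketch: parse the generated sequence, drop illegal tuples, de-duplicate.
def postprocess(generated: str):
    seen, facts = set(), []
    for chunk in generated.split(")"):
        chunk = chunk.strip()
        if not chunk.startswith("("):
            continue                                  # illegal format: no opening bracket
        elements = tuple(e.strip() for e in chunk[1:].split("$"))
        if len(elements) < 3 or any(not e for e in elements):
            continue                                  # illegal format: too few / empty elements
        if elements not in seen:                      # remove duplicated tuples
            seen.add(elements)
            facts.append(elements)
    return facts

print(postprocess("(李白$爱$饮酒)(李白$爱$饮酒)(李白$爱$)"))
# -> [('李白', '爱', '饮酒')]
```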
Experimental Design
We first measure the utility of various components in Logician to select the optimal model, and then compare this model to the state-of-the-art methods in four types of information extraction tasks: verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation. The SAOKE data set is split into training set, validating set and testing set with ratios of 80%, 10%, 10%, respectively. For all algorithms involved in the experiments, the training set can be used to train the model, the validating set can be used to select an optimal model, and the testing set is used to evaluate the performance.
For each instance pair $(S,F)$ in the test set, where $S$ is the input sentence and $F$ is the formatted string of ground truth of facts, we parse $F$ into a set of tuples $\mathbb {F}=\lbrace F_{j}\rbrace _{j=1}^{M}$ . Given an open information extraction algorithm, it reads $S$ and produces a set of tuples $\mathbb {G}=\lbrace G_{i}\rbrace _{i=1}^{N}$ . To evaluate how well the $\mathbb {G}$ approximates $\mathbb {F}$ , we need to match each $G_{i}$ to a ground truth fact $F_{j}$ and check whether $G_{i}$ tells the same fact as $F_{j}$ . To conduct the match, we compute the similarity between each predicted fact in $\mathbb {G}$ and each ground truth fact in $\mathbb {F}$ , then find the optimal matching to maximize the sum of matched similarities by solving a linear assignment problem BIBREF28 . In the procedure, the similarity between two facts is defined as $ \mathrm {Sim}(G_{i},F_{j})=\frac{\sum _{l=1}^{\min (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))}\mathbf {g}(G_{i}(l),F_{j}(l))}{\max (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))}, $
where $G_{i}(l)$ and $F_{j}(l)$ denote the $l$ -th element of tuple $G_{i}$ and $F_{j}$ respectively, $\mathbf {g}(\cdot ,\cdot )$ denotes the gestalt pattern matching BIBREF29 measure for two strings and $\mathbf {n}(\cdot )$ returns the length of the tuple.
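The matching step can be sketched with standard Python tooling: `difflib.SequenceMatcher` implements the gestalt (Ratcliff/Obershelp) string measure and `scipy.optimize.linear_sum_assignment` solves the linear assignment problem. The element-averaged similarity below follows the reconstruction above and should be read as an illustrative form.

```python
from difflib import SequenceMatcher
import numpy as np
from scipy.optimize import linear_sum_assignment

def g(a: str, b: str) -> float:
    """Gestalt pattern matching score between two strings."""
    return SequenceMatcher(None, a, b).ratio()

def fact_similarity(G_i, F_j) -> float:
    """Element-wise gestalt scores, normalized by the longer tuple length (illustrative form)."""
    m = min(len(G_i), len(F_j))
    return sum(g(G_i[l], F_j[l]) for l in range(m)) / max(len(G_i), len(F_j))

def match_facts(predicted, gold):
    """Optimal one-to-one matching that maximizes total similarity."""
    sim = np.array([[fact_similarity(G_i, F_j) for F_j in gold] for G_i in predicted])
    rows, cols = linear_sum_assignment(-sim)          # maximize by negating
    return [(r, c, sim[r, c]) for r, c in zip(rows, cols)]

pred = [("Li Bai", "loved to", "drink"), ("Li Bai", "loved to", "write poetry")]
gold = [("Li Bai", "loved to", "write poetry"), ("Li Bai", "loved to", "drink")]
print(match_facts(pred, gold))
```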
Given a matched pair of $G_{i}$ and $F_{j}$ , we propose an automatic approach to judge whether they tell the same fact. They are judged as telling the same fact if one of the following two conditions is satisfied:
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(G_{i}(l),F_{j}(l))\ge 0.85,l=1,\cdots ,\mathbf {n}(G_{i})$ ;
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(\mathcal {S}(G_{i}),\mathcal {S}(F_{j}))\ge 0.85$ ;
where $\mathcal {S}$ is a function formatting a fact into a string by filling the arguments into the placeholders of the predicate.
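A direct transcription of the two judgment conditions is sketched below; the string-formatting function `format_fact` stands in for $\mathcal{S}$ and its simple join is an assumption about how arguments are filled into the predicate.

```python
from difflib import SequenceMatcher

# Automatic judgment sketch for a matched pair (G_i, F_j).
def g(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()        # gestalt pattern matching

def format_fact(fact) -> str:
    """Stand-in for S(.): fill the arguments into the predicate (here: a simple join)."""
    return " ".join(fact)

def same_fact(G_i, F_j, threshold: float = 0.85) -> bool:
    if len(G_i) != len(F_j):
        return False
    element_wise = all(g(G_i[l], F_j[l]) >= threshold for l in range(len(G_i)))
    whole_string = g(format_fact(G_i), format_fact(F_j)) >= threshold
    return element_wise or whole_string

print(same_fact(("Li Bai", "loved to", "drink"), ("Li Bai", "loved to", "drinks")))
```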
With the automatic judgment, the precision ( $P$ ), recall ( $R$ ) and $F_{1}$ -score over a test set can be computed. By defining a confidence measure and ordering the facts by their confidences, a precision-recall curve can be drawn to illustrate the overall performance of the algorithm. For Logician, the confidence of a fact is computed as the average of log probabilities over all words in that fact.
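Given the per-fact confidences (the average log-probability of the words in a fact), precision-recall points can be traced by sweeping a threshold, as in the short sketch below.

```python
import numpy as np

def pr_curve(confidences, correct, num_gold):
    """Precision/recall points obtained by sweeping a confidence threshold.

    confidences: per-fact scores (e.g. average log-probability over the fact's words)
    correct:     booleans from the automatic judgment
    num_gold:    total number of ground-truth facts
    """
    order = np.argsort(-np.asarray(confidences))
    correct = np.asarray(correct, dtype=float)[order]
    tp = np.cumsum(correct)
    precision = tp / np.arange(1, len(correct) + 1)
    recall = tp / num_gold
    return precision, recall

p, r = pr_curve([-0.2, -1.5, -0.7], [True, False, True], num_gold=4)
print(list(zip(p.round(2), r.round(2))))
```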
Beyond the automatic judgment, human evaluation is also employed. Given an algorithm and the corresponding fact confidence measure, we find a threshold that produces approximately 10% recall (measured by automatic judgment) on the validation set of SAOKE data set. A certain number of sentences (200 for verb/preposition based relation extraction task, and 1000 for other three tasks) are randomly chosen from the testing set of SAOKE data set, and the facts extracted from these sentences are filtered with that threshold. Then we invite three volunteers to manually refine the labeled set of facts for each sentence and vote to decide whether each filtered fact is correctly involved in the sentence. The standard precision, recall and $F_{1}$ -score are reported as the human evaluation results.
For each instance pair $(S,F)$ in the training set of SAOKE data set, we split $S$ and $F$ into words using LTP toolset BIBREF27 , and words appearing in more than 2 sentences are added to the vocabulary. By adding the OOV (out of vocabulary) tag, we finally obtain a vocabulary $V$ with size $N_{V}=65,293$ . The dimension of all embedding vectors is set to $N_{e}=200$ , and the dimension of hidden states is set to $N_{h}=256$ . We use a three-layer bi-directional GRU with dimension 128 to encode $\lbrace x_{i}\rbrace _{i=1}^{N_{S}}$ into hidden states $\lbrace h_{i}^{S}\rbrace _{i=1}^{N_{S}}$ , and a two-layer GRU with hidden-dimension 256 to encode the sequence of $\lbrace y_{j}\rbrace _{j=1}^{N_{F}}$ into hidden states $\lbrace h_{j}^{F}\rbrace _{j=1}^{N_{F}}$ . Finally, the Logician network is constructed as stated in Section "Logician" . The Logician is then trained using stochastic gradient descent (SGD) with the RMSProp BIBREF30 strategy for 20 epochs with batch size 10 on the training set of SAOKE data set. The model with the best $F_{1}$ -score by automatic judgment on the validation set is selected as the trained model. When the model is trained, given a sentence, we employ the greedy search procedure to produce the fact sequences.
Evaluating Components' Utilities
In this section, we analyze the effects of the components involved in Logician: restricted copy, coverage, and gated dependency. Since the restricted copy mechanism is the essential requirement of Logician in order to achieve the goal of being literally honest, we take the Logician with only the copy mechanism (denoted by $Copy$ ) as the baseline, and analyze the effectiveness of the coverage mechanism (denoted by $Copy+Coverage$ ), the gated dependency mechanism (denoted by $Copy+GatedDep$ ) and both (denoted by $All$ ). Furthermore, there is another option of whether or not to involve shallow semantic information such as POS-tags and NER-tags in the model. For models involving such information, the POS-tag and NER-tag of each word in sentence $S$ are annotated using LTP. For each word in $F$ that is not any keyword in $K$ , the POS-tag and NER-tag are copied from the corresponding original word in $S$ . For each keyword in $K$ , a unique POS-tag and a unique NER-tag are assigned to it. Finally, for each word in $S$ or $F$ , the POS-tag and NER-tag are mapped into distributed representation vectors and are concatenated into $x_{i}$ or $y_{j}$ to take part in the training.
All models are trained using the same settings described in the above section, and the default output facts (without any confidence filtering) are evaluated by the automatic judgment. The results are reported in Table 4 . From the results, we can see that the model involving all the components and the shallow tag information achieves the best performance. We use that model in the comparisons with existing approaches.
Comparison with Existing Approaches
In the task of extracting verb/preposition based facts, we compare our Logician with the following state-of-the-art Chinese OIE algorithms:
SRLIE: our implementation of SRLIE BIBREF15 for the Chinese language, which first uses LTP tool set to extract the semantic role labels, and converts the results into fact tuples using heuristic rules. The confidence of each fact is computed as the ratio of the number of words in the fact to the number of words in the shortest fragment of source sentence that contains all words in the fact.
ZORE : the Chinese Open Relation Extraction system BIBREF31 , which builds a set of patterns by bootstrapping based on dependency parsing results, and uses the patterns to extract relations. We used the program provided by the author of ZORE system BIBREF31 to generate the extraction results in XML format, and developed an algorithm to transform the facts into n-ary tuples, where auxiliary information extracted by ZORE is removed. The confidence measure for ZORE is the same as that for SRLIE.
SRL $_{\text{SAOKE}}$ : our implementation of the state-of-the-art SRL algorithm proposed in BIBREF32 with modifications to fit OIE tasks. $\text{SRL}_{\text{SAOKE}}$ extracts facts in two steps: (i) Predicate head word detection: detects the head word for the predicate of each possible fact, where the head word of a predicate is the last word in the predicate depending on words outside the predicate in the dependency tree. (ii) Element phrase detection: For each detected head word, detects the subject phrase, predicate phrase and object phrases by tagging the sentence with an extended BIOE tagging scheme, which tags the word neighboring the separation point of a phrase with “M” to cope with separated phrases. We modify the code provided by the author of BIBREF32 to implement the above strategy, and then train a model with the same parameter setting as in BIBREF32 on the training set of SAOKE data set. The confidence measure for $\text{SRL}_{\text{SAOKE}}$ is computed as the average of log probabilities over all tags of words in facts. Note that $\text{SRL}_{\text{SAOKE}}$ can extract both verb/preposition based relations and nominal attributes, but in this section, we only evaluate the results for the former type of facts.
The precision-recall curves of Logician and above three comparison algorithms are shown in Figure 1 , and the human evaluation results are shown in the first section of Table 5 .
The state-of-the-art nominal attribute extraction method is ReNoun BIBREF16 , BIBREF17 . However, it relies on a pre-constructed English attribute schema system BIBREF33 which is not available for Chinese, so it is not an available baseline for Chinese. Since $\text{SRL}_{\text{SAOKE}}$ can extract nominal attributes, we compare Logician with $\text{SRL}_{\text{SAOKE}}$ on this task. The precision-recall curves of Logician and $\text{SRL}_{\text{SAOKE}}$ on the nominal attribute extraction task are shown in Figure 1 , and the human evaluation results are shown in the second section of Table 5 .
Descriptive phrase extraction has been considered in BIBREF18 , in which domain names are required to develop patterns to extract candidates for descriptive phrases, so this method is not applicable to open domain tasks. We develop a baseline algorithm (called Semantic Dependency Description Extractor, SDDE) to extract descriptive phrase. It extracts semantic dependency relation between words using LTP toolset, and for each noun $w_n$ which is the parent of some semantic “Desc” relations, identifies a noun phrase $N$ with $w_n$ as its heading word, assembles a descriptive phrase $D$ containing all words with “Desc” relation to $w_n$ , and finally outputs the fact “( $N$ , $DESC$ , $D$ )”. The confidence of fact in SDDE is computed as the ratio of the number of adverbs and adjectives in $D$ to the number of words in $D$ . The precision-recall curves of Logician and SDDE on the descriptive phrase extraction task are shown in Figure 1 , and the human evaluation results are shown in the third section of Table 5 .
HypeNet BIBREF19 is the state-of-the-art algorithm recommended for hyponymy extraction BIBREF34 , which judges whether hyponymy relation exists between two given words. To make it capable of judging hyponymy relation between two phrases, we replace the word embedding vector component in HypeNet by an LSTM network. Two modified HypeNet models are built using different training data sets: (i) $\text{HypeNet}_{\text{Phrase}}$ : using the pairs of phrases with ISA relation in the training set of SAOKE data set (9,407 pairs after the compact expression expansion); (ii) $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ : besides the training set for $\text{HypeNet}_{\text{Phrase}}$ , adding two Chinese hyponymy data sets (1.4 million pair of words in total in hyponymy relation): Tongyici Cilin (Extended) (CilinE for short) BIBREF27 and cleaned Wikipedia Category data BIBREF35 . In both cases, the sentences from both Chinese Wikipedia pages and training set of SAOKE data set are taken as the background corpus for the HypeNet algorithm. In the testing phase, the trained models are used to predict whether the hyponymy relation exists for each pair of noun phrases/words in sentences of the testing set of SAOKE data set. The confidence of a judgment is the predicted probability of the existence of hyponymy relation. The precision-recall curves of Logician, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ are shown in Figure 1 , and the human evaluation results in the fourth section of Table 5 .
Results Analysis
The experimental results reveal that Logician outperforms the comparison methods by a large margin in the first three tasks. For the hyponymy detection task, Logician clearly outperforms $\text{HypeNet}_{\text{Phrase}}$ using the same training data, and produces comparable results to $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ with much less training data. Table 6 exhibits several example sentences and the facts extracted by these algorithms.
The poor performance of the pattern-based methods is plausibly due to the noise in the SAOKE data set. The sentences in the SAOKE data set are randomly selected from a web encyclopedia and written in a free and casual style, and are thus noisier than the training data of the NLP toolsets used by these methods. In this situation, the NLP toolsets may produce poor results, and so do the pattern-based methods built on them.
Models learned from the SAOKE data set achieve much better performance. Nevertheless, $\text{SRL}_{\text{SAOKE}}$ extracts each fact without knowing whether a candidate word has been used in other facts, which results in the misleading overlap of the word UTF8gbsn“学” (“Learn” in English) between two facts in the first case of Table 6 . Similarly, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ focus on the semantic vectors of pairs of phrases and their dependency paths in the background corpus. They extract each fact independently of other facts and hence do not know whether there have been any other relations extracted about these two phrases. In other words, for those comparison methods, an important source of information is neglected and a global optimization over all facts involved in a sentence is absent.
On the contrary, Logician performs global optimization over the facts involved in each sentence by the sequence-to-sequence learning paradigm with the help of the coverage mechanism, in which facts compete with each other to attract the attention of words, but also cooperate to share words. Valuable information is shared between these multiple tasks, which makes Logician consistently superior to other algorithms in these tasks.
Furthermore, the $\text{SRL}_{\text{SAOKE}}$ and $\text{HypeNet}$ methods suffer from the OOV problem caused by unfamiliar words/phrases, such as the person name and school name in the last case of Table 6 . In this situation they may fail to produce a reasonable result. Logician is able to cope with unfamiliar words/phrases by exploiting the context information using a deep RNN network with the help of the copy mechanism.
Extraction Error Analysis of Logician
We perform a preliminary analysis of the results produced by the Logician model. The most notable problem is that it is unable to recall some facts for long or complex sentences. The last case in Table 6 exhibits such a situation, where the fact UTF8gbsn(蔡竞,ISA,经济学博士)((Cai Jing, ISA, Ph. D. in economics) in English) is not recalled. This phenomenon indicates that the coverage mechanism may lose effectiveness in this situation. The second class of error is incomplete extraction, as exhibited in the third case in Table 6 . Due to the incomplete extraction, the leftover parts may interfere with the generation of other facts and result in nonsense results, which is the third class of error. We believe it is helpful to introduce extra rewards into the learning procedure of Logician to overcome these problems. For example, the reward could be the amount of information remaining after the fact extraction, or the completeness of extracted facts. Developing such rewards and reinforcement learning algorithms that use them to refine Logician is left for future work.
Knowledge Expressions
Tuple is the most common knowledge expression format for OIE systems to express n-ary relation between subject and objects. Beyond such information, ClausIE BIBREF36 extracts extra information in the tuples: a complement, and one or more adverbials, and OLLIE BIBREF6 extracts additional context information. SAOKE is able to express n-ary relations, and can be easily extended to support the knowledge extracted by ClausIE, but needs to be redesigned to support context information, which belongs to our future work.
However, there is a fundamental difference between SAOKE and tuples in traditional OIE systems. In traditional OIE systems, knowledge expression is generally not directly related to the extraction algorithm. It is a tool to reorganize the extracted knowledge into a form for further easy reading/storing/computing. However, SAOKE is proposed to act as the direct learning target of the end-to-end Logician model. In such end-to-end framework, knowledge representation is the core of the system, which decides what information would be extracted and how complex the learning algorithm would be. To our knowledge, SAOKE is the first attempt to design a knowledge expression friendly to the end-to-end learning algorithm for OIE tasks. Efforts are still needed to make SAOKE more powerful in order to express more complex knowledge such as events.
Relation Extraction
Relation extraction is the task to identify semantic connections between entities. Major existing relation extraction algorithms can be classified into two classes: closed-domain and open-domain. Closed-domain algorithms are learnt to identify a fixed and finite set of relations, using supervised methods BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 or weakly supervised methods BIBREF1 , BIBREF41 , while the open-domain algorithms, represented by aforementioned OIE systems, discover open-domain relations without predefined schema. Beyond these two classes, methods like universal schema BIBREF42 are able to learn from both data with fixed and finite set of relations, such as relations in Freebase, and data with open-domain surface relations produced by heuristic patterns or OIE systems.
Logician can be used as an OIE system to extract open-domain relations between entities, and act as a sub-system for knowledge base construction/completion with the help of schema mapping BIBREF43 . Compared with existing OIE systems, which are pattern-based or self-supervised by labeling samples using patterns BIBREF13 , to our knowledge Logician is the first model trained in a supervised end-to-end approach for the OIE task, and it has exhibited strong performance in our experiments. There are some neural end-to-end systems BIBREF39 , BIBREF40 , BIBREF41 proposed for relation extraction, but they all aim to solve the closed-domain problem.
However, Logician is not limited to relation extraction task. First, Logician extracts more information beyond relations. Second, Logician focuses on examining how natural languages express facts BIBREF5 , and producing helpful intermediate structures for high level tasks.
Language to Logic
Efforts had been made to map natural language sentences into logical form. Some approaches such as BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 learn the mapping under the supervision of manually labeled logical forms, while others BIBREF48 , BIBREF49 are indirectly supervised by distant information, system rewards, etc. However, all previous works rely on a pre-defined, domain specific logical system, which limits their ability to learn facts out of the pre-defined logical system.
Logician can be viewed as a system that maps language to natural logic, in which the majority of information is expressed by natural phrase. Other than systems mentioned above which aim at execution using the logical form, Logician focuses on understanding how the fact and logic are expressed by natural language. Further mapping to domain-specific logical system or even executor can be built on the basis of Logician's output, and we believe that, with the help of Logician, the work would be easier and the overall performance of the system may be improved.
Facts to Language
The problem of generating sentences from a set of facts has attracted a lot of attention BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 . These models focus on facts with a predefined schema from a specific problem domain, such as people biographies and basketball game records, and thus cannot work in the open domain. The SAOKE data set provides an opportunity to extend the ability of these models to the open domain.
Duality between Knowledge and Language
As mentioned in above sections, the SAOKE data set provides examples of dual mapping between facts and sentences. Duality has been verified to be useful to promote the performance of agents in many NLP tasks, such as back-and-forth translation BIBREF54 , and question-answering BIBREF55 . It is a promising approach to use the duality between knowledge and language to improve the performance of Logician.
Conclusion
In this paper, we consider the open information extraction (OIE) problem for a variety of types of facts in a unified view. Our solution consists of three components: the SAOKE format, the SAOKE data set, and Logician. The SAOKE format is designed to express different types of facts in a unified manner. We publicly release the largest manually labeled data set for OIE tasks in SAOKE form. Using the labeled SAOKE data set, we train an end-to-end neural sequence-to-sequence model, called Logician, to transform sentences in natural language into facts. The experiments reveal the superiority of Logician over the state-of-the-art algorithms in various open-domain information extraction tasks.
Regarding future work, there are at least three promising directions. Firstly, one can investigate knowledge expression methods to extend SAOKE to express more complex knowledge, for tasks such as event extraction. Secondly, one can develop novel learning strategies to improve the performance of Logician and adapt the algorithm to the extended future version of SAOKE. Thirdly, one can extend SAOKE format and Logician algorithm in other languages. | restricted copy mechanism to ensure literally honestness, coverage mechanism to alleviate the under extraction and over extraction problem, and gated dependency attention mechanism to incorporate dependency information |
8c0e8a312b85c4ffdffabeef0d29df1ef8ff7fb2 | 8c0e8a312b85c4ffdffabeef0d29df1ef8ff7fb2_0 | Q: What's the size of the previous largest OpenIE dataset?
Text: Introduction
Semantic applications typically work on the basis of intermediate structures derived from sentences. Traditional word-level intermediate structures, such as POS-tags, dependency trees and semantic role labels, have been widely applied. Recently, entity and relation level intermediate structures have attracted increasing attention.
In general, knowledge based applications require entity and relation level information. For instance, in BIBREF0 , the lexicalized dependency path between two entity mentions was taken as the surface pattern facts. In distant supervision BIBREF1 , the word sequence and dependency path between two entity mentions were taken as evidence of certain relation. In Probase BIBREF2 , candidates of taxonomies were extracted by Hearst patterns BIBREF3 . The surface patterns of relations extracted by Open Information Extraction (OIE) systems BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 worked as the source of question answering systems BIBREF9 , BIBREF10 . In addition, entity and relation level intermediate structures have been proven effective in many other tasks such as text summarization BIBREF11 , BIBREF12 , BIBREF13 , text comprehension, word similarity, word analogy BIBREF14 , and more.
The task of entity/relation level intermediate structure extraction studies how facts about entities and relations are expressed by natural language in sentences, and then expresses these facts in an intermediate (and convenient) format. Although entity/relation level intermediate structures have been utilized in many applications, the study of learning these structures is still in an early stage.
Firstly, the problem of extracting different types of entity/relation level intermediate structures has not been considered in a unified fashion. Applications generally need to construct their own handcrafted heuristics to extract required entity/relation level intermediate structures, rather than consulting a commonly available NLP component, as they do for word level intermediate structures. Open IE-v4 system (http://knowitall.github.io/openie/) attempted to build such components by developing two sub-systems, with each extracting one type of intermediate structures, i.e., SRLIE BIBREF15 for verb based relations, and ReNoun BIBREF16 , BIBREF17 for nominal attributes. However, important information about descriptive tags for entities and concept-instance relations between entities were not considered.
Secondly, existing solutions to the task either used the pattern matching technique BIBREF2 , BIBREF4 , BIBREF6 , BIBREF7 , or were trained in a self-supervised manner on data sets automatically generated by heuristic patterns or info-box matching BIBREF7 , BIBREF4 , BIBREF8 . It is well understood that pattern matching typically does not generalize well and that the automatically generated samples may contain a lot of noise.
This paper aims at tackling some of the well-known challenging problems in OIE systems, in a supervised end-to-end deep learning paradigm. Our contribution can be summarized as three major components: SAOKE format, SAOKE data set, and Logician.
Symbol Aided Open Knowledge Expression (SAOKE) is a knowledge expression form with several desirable properties: (i) SAOKE is literally honest and open-domain. Following the philosophy of OIE systems, SAOKE uses words in the original sentence to express knowledge. (ii) SAOKE provides a unified view over four common types of knowledge: relation, attribute, description and concept. (iii) SAOKE is an accurate expression. With the aid of symbolic system, SAOKE is able to accurately express facts with separated relation phrases, missing information, hidden information, etc.
SAOKE Data Set is a human annotated data set containing 48,248 Chinese sentences and corresponding facts in the SAOKE form. We publish the data set for research purpose. To the best of our knowledge, this is the largest publicly available human annotated data set for open-domain information extraction tasks.
Logician is a supervised end-to-end neural learning algorithm which transforms natural language sentences into facts in the SAOKE form. Logician is trained under the attention-based sequence-to-sequence paradigm, with three mechanisms: the restricted copy mechanism to ensure literal honesty, the coverage mechanism to alleviate the under-extraction and over-extraction problems, and the gated dependency attention mechanism to incorporate dependency information. Experimental results on four types of open information extraction tasks reveal the superiority of the Logician algorithm.
Our work will demonstrate that SAOKE format is suitable for expressing various types of knowledge and is friendly to end-to-end learning algorithms. Particularly, we will focus on showing that the supervised end-to-end learning is promising for OIE tasks, to extract entity and relation level intermediate structures.
The rest of this paper is organized as follows. Section "SAOKE Format: Symbol Aided Open Knowledge Expression" presents the details of SAOKE. Section "SAOKE Data Set" describes the human labeled SAOKE data set. Section "Logician" describes the Logician algorithm and Section "Empirical Evaluation" evaluates the Logician algorithm and compares its performance with the state-of-the-art algorithms on four OIE tasks. Section "Related Works" discusses the related work and Section "Conclusion" concludes the paper.
SAOKE Format: Symbol Aided Open Knowledge Expression
When reading a sentence in natural language, humans are able to recognize the facts involved in the sentence and accurately express them. In this paper, Symbolic Aided Open Knowledge Expression (SAOKE) is proposed as the form for honestly recording these facts. SAOKE expresses the primary information of sentences in n-ary tuples $(subject,predicate,object_{1},\cdots ,object_{N})$ , and (in this paper) neglects some auxiliary information. In the design of SAOKE, we take four requirements into consideration: completeness, accurateness, atomicity and compactness.
Completeness
After having analyzed a large number of sentences, we observe that the majority of facts can be classified into the following classes:
Relation: Verb/preposition based n-ary relations between entity mentions BIBREF15 , BIBREF6 ;
Attribute: Nominal attributes for entity mentions BIBREF16 , BIBREF17 ;
Description: Descriptive phrases of entity mentions BIBREF18 ;
Concept: Hyponymy and synonym relations among concepts and instances BIBREF19 .
SAOKE is designed to express all these four types of facts. Table 1 presents an example sentence and the involved facts of these four classes in the SAOKE form. We should mention that the sentences and facts in English are directly translated from the corresponding Chinese sentences and facts, and the facts in English may not be the desired outputs of OIE algorithms for those English sentences due to the differences between Chinese and English languages.
Accurateness
SAOKE adopts the ideology of “literally honest”. That is, as much as possible, it uses the words in the original sentences to express the facts. SAOKE follows the philosophy of OIE systems to express various relations without relying on any predefined schema system. There are, however, exceptional situations which are beyond the expression ability of this format. Extra symbols will be introduced to handle these situations, which are explained as follows.
Separated relation phrase: In some languages such as Chinese, relation phrases may be divided into several parts residing in discontinued locations of the sentences. To accurately express these relation phrases, we add placeholders ( $X$ , $Y$ , $Z$ , etc) to build continuous and complete expressions. UTF8gbsn “深受X影响” (“deeply influenced by X” in English) in the example of Table 1 is an instance of relation phrase after such processing.
Abbreviated expression: We explicitly express the information in abbreviated expressions by introducing symbolic predicates. For example, the expression of “Person (birth date - death date)” is transformed into facts: (Person, BIRTH, birth date) (Person, DEATH, death date), and the synonym fact involved in “NBA (National Basketball Association)” is expressed in the form of (NBA, = , National Basketball Association) .
Hidden information: Description of an entity and hyponymy relation between entities are in general expressed implicitly in sentences, and are expressed by symbolic predicates “DESC” and “ISA” respectively, as in Table 1 . Another source of hidden information is the address expression. For example, UTF8gbsn “法国巴黎” (“Paris, France” in English) implies the fact UTF8gbsn (巴黎, LOC, 法国) ((Paris, LOC, France) in English), where the symbol “LOC” means “location”.
Missing information: A sentence may not tell us the exact relation between two entities, or the exact subject/objects of a relation, which are required to be inferred from the context. We use placeholders like “ $X,Y,Z$ ” to denote the missing subjects/objects, and “ $P$ ” to denote the missing predicates.
Atomicity
Atomicity is introduced to eliminate the ambiguity of knowledge expressions. In SAOKE format, each fact is required to be atomic, which means that: (i) it is self-contained for an accurate expression; (ii) it cannot be decomposed into multiple valid facts. We provide examples in Table 2 to help understand these two criteria.
Note that the second criterion implies that any logical connections (including nested expressions) between facts are neglected (e.g., the third case in Table 2 ). The problem of expressing relations between facts will be considered in a future version of SAOKE.
Compactness
Natural language may express several facts in a compact form. For example, in a sentence UTF8gbsn “李白爱饮酒作诗” (“Li Bai loved to drink and write poetry” in English ), according to atomicity, two facts should be extracted: UTF8gbsn (李白, 爱, 饮酒)(李白, 爱, 作诗) ( (Li Bai, loved to, drink)(Li Bai, loved to, write poetry) in English ). In this situation, SAOKE adopts a compact expression to merge these two facts into one expression: UTF8gbsn (李白, 爱, [饮酒|作诗]) ( (Li Bai, loved to, [drink| write poetry]) in English ).
The compactness of expressions is introduced to fulfill, but not to violate the rule of “literally honest”. SAOKE does not allow merging facts if facts are not expressed compactly in original sentences. By this means, the differences between the sentences and the corresponding knowledge expressions are reduced, which may help reduce the complexity of learning from data in SAOKE form.
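Since downstream consumers usually want one tuple per atomic fact, compact expressions have to be expanded back. A small sketch of such an expansion, assuming the bracket-and-bar notation shown above, is given below.

```python
from itertools import product

# Expand a compact SAOKE fact such as ("李白", "爱", "[饮酒|作诗]") into atomic facts.
# The "[...|...]" notation follows the example above; other conventions may differ.
def expand_compact_fact(fact):
    options = []
    for element in fact:
        if element.startswith("[") and element.endswith("]"):
            options.append(element[1:-1].split("|"))
        else:
            options.append([element])
    return [tuple(choice) for choice in product(*options)]

print(expand_compact_fact(("李白", "爱", "[饮酒|作诗]")))
# -> [('李白', '爱', '饮酒'), ('李白', '爱', '作诗')]
```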
With the above designs, SAOKE is able to express various kinds of facts, with each historically considered by different open information extraction algorithms, for example, verb based relations in SRLIE BIBREF15 and nominal attributes in ReNoun BIBREF16 , BIBREF17 , descriptive phrases for entities in EntityTagger BIBREF18 , and hypernyms in HypeNet BIBREF19 . SAOKE introduces the atomicity to eliminate the ambiguity of knowledge expressions, and achieves better accuracy and compactness with the aid of the symbolic expressions.
SAOKE Data Set
We randomly collect sentences from Baidu Baike (http://baike.baidu.com), and send those sentences to a crowd sourcing company to label the involved facts. The workers are trained with labeling examples and tested with exams. Then the workers with high exam scores are asked to read and understand the facts in the sentences, and express the facts in the SAOKE format. During the procedure, one sentence is only labeled by one worker. Finally, more than forty thousand sentences with about one hundred thousand facts are returned to us. The manual evaluation results on 100 randomly selected sentences show that the fact level precision and recall is 89.5% and 92.2% respectively. Table 3 shows the proportions of four types of facts (described in Section "SAOKE Data Set" ) contained in the data set. Note that the facts with missing predicates represented by “P” are classified into “Unknown”. We publicize the data set at https://ai.baidu.com/broad/subordinate?dataset=saoke.
Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, and was described in BIBREF20 as “13 times larger than the previous largest annotated Open IE corpus”. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, the SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that it only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms and is thus biased, whereas the SAOKE data set is not biased toward any particular algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.
Logician
Given a sentence $S$ and a set of expected facts (with all the possible types of facts) $\mathbb {F}=\lbrace F_{1},\cdots ,F_{n}\rbrace $ in SAOKE form, we join all the facts in the order that annotators wrote them into a char sequence $F$ as the expected output. We build Logician under the attention-based sequence-to-sequence learning paradigm, to transform $S$ into $F$ , together with the restricted copy mechanism, the coverage mechanism and the gated dependency mechanism.
Attention based Sequence-to-sequence Learning
The attention-based sequence-to-sequence learning BIBREF22 has been successfully applied to the task of generating text and patterns. Given an input sentence $S=[w_{1}^{S},\cdots ,w_{N_{S}}^{S}]$ , the target sequence $F=[w_{1}^{F},\cdots ,w_{N_{F}}^{F}]$ and a vocabulary $V$ (including the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" and the OOV (out of vocabulary) tag ) with size $N_{v}$ , the words $w_{i}^{S}$ and $w_{j}^{F}$ can be represented as one-hot vectors $v_{i}^{S}$ and $v_{j}^{F}$ with dimension $N_{v}$ , and transformed into $N_{e}$ -dimensional distributed representation vectors $x_{i}=E^{S}v_{i}^{S}$ and $y_{j}=E^{F}v_{j}^{F}$ by embedding transforms $E^{S}$ and $E^{F}$ respectively, where $E^{S},E^{F}\in \mathbb {R}^{N_{e}\times N_{v}}$ . Then the sequence of $\lbrace x_{i}\rbrace _{i=1}^{N_{S}}$ is transformed into a sequence of $N_{h}$ -dimensional hidden states $H^{S}=\lbrace h_{i}^{S}\rbrace _{i=1}^{N_{S}}$ using a bi-directional GRU (Gated Recurrent Units) network BIBREF23 , and the sequence of $\lbrace y_{j}\rbrace _{j=1}^{N_{F}}$ is transformed into a sequence of $N_{h}$ -dimensional hidden states $H^{F}=\lbrace h_{j}^{F}\rbrace _{j=1}^{N_{F}}$ using a GRU network.
For each position $t$ in the target sequence, the decoder learns a dynamic context vector $c_{t}$ to focus attention on specific locations in the input hidden states $H^{S}$ , then computes the probability of generated words by $p(w_{t}^{F}|\lbrace w_{1}^{F},\cdots ,w_{t-1}^{F}\rbrace ,c_{t})=g(h_{t-1}^{F},s_{t},c_{t})$ , where $s_{t}$ is the hidden state of the GRU decoder, $g$ is the word selection model (details could be found in BIBREF22 ), and $c_{t}$ is computed as $c_{t}=\sum _{j=1}^{N_{S}}\alpha _{tj}h_{j}^{S},$ where $\alpha _{tj}=\frac{\exp (e_{tj})}{\sum _{k=1}^{N_{S}}\exp (e_{tk})}$ and $e_{tj}=a(s_{t-1},h_{j}^{S})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S})$ is the alignment model to measure the strength of focus on the $j$ -th location. $v_{a}$ , $W_{a}$ , and $U_{a}$ are weight matrices.
Restricted Copy Mechanism
The word selection model employed in BIBREF22 selects words from the whole vocabulary $V$ , which evidently violates the “literal honest” requirement of SAOKE. We propose a restricted version of copy mechanism BIBREF24 as the word selection model for Logician:
We collect the symbols introduced in Section "SAOKE Format: Symbol Aided Open Knowledge Expression" into a keyword set $K=\lbrace $ “ $ISA$ ”, “ $DESC$ ”, “ $LOC$ ”, “ $BIRTH$ ”, “ $DEATH$ ”, “ $=$ ”, “ $($ ”, “ $)$ ”, “ $\$$ ”, “ $[$ ”, “ $]$ ”, “ $|$ ”, “ $X$ ”, “ $Y$ ”, “ $Z$ ”, “ $P$ ” $\rbrace $ , where “ $\$$ ” is the separator of elements of fact tuples, and “ $X$ ”, “ $Y$ ”, “ $Z$ ”, “ $P$ ” are placeholders. When the decoder is considering generating a word $w_{t}^{F}$ , it can choose $w_{t}^{F}$ from either $S$ or $K$ .
$$p(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})=p_{X}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})+p_{K}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t}),$$ (Eq. 15)
where $p_{X}$ is the probability of copying from $S$ and $p_{K}$ is the probability of selecting from $K$ . Since $S\cap K=\phi $ and there are no unknown words in this problem setting, we compute $p_{X}$ and $p_{K}$ in a simpler way than that in BIBREF24 , as follows: $ p_{X}(w_{t}^{F}=w_{j}^{S}) & = & \frac{1}{Z}\exp (\sigma ((h_{j}^{S})^{T}W_{c})s_{t}),\\ p_{K}(w_{t}^{F}=k_{i}) & = & \frac{1}{Z}\exp (v_{i}^{T}W_{o}s_{t}), $
where the (generic) $Z$ is the normalization term, $k_{i}$ is one of keywords, $v_{i}$ is the one-hot indicator vector for $k_{i}$ , $W_{o}\in \mathbb {R}^{(|K|\times N_{h})}$ , $W_{c}\in \mathbb {R}^{(N_{h}\times N_{h})}$ , and $\sigma $ is a nonlinear activation function.
Coverage Mechanism
In practice, Logician may forget to extract some facts (under-extraction) or extract the same fact many times (over-extraction). We incorporate the coverage mechanism BIBREF25 into Logician to alleviate these problems. Formally, when the decoder considers generating a word $w_{t}^{F}$ , a coverage vector $m_{j}^{t}$ is introduced for each word $w_{j}^{S}$ , and updated as follows: $ m_{j}^{t} & = & \mu (m_{j}^{t-1},\alpha _{tj},h_{j}^{S},s_{t-1})=(1-z_{j})\circ m_{j}^{t-1}+z_{j}\circ \tilde{m}_{j}^{t},\\ \tilde{m}_{j}^{t} & = & \tanh (W_{h}h_{j}^{S}+u_{\alpha }\alpha _{tj}+W_{s}s_{t-1}+U_{m}[r_{j}\circ m_{j}^{t-1}]), $
where $\circ $ is the element-wise multiplication operator. The update gate $z_{j}$ and the reset gate $r_{j}$ are defined as, respectively, $ z_{j} & = & \sigma (W_{h}^{z}h_{j}^{S}+u_{\alpha }^{z}\alpha _{tj}+W_{s}^{z}s_{t-1}+U_{m}^{z}m_{j}^{t-1}),\\ r_{j} & = & \sigma (W_{h}^{r}h_{j}^{S}+u_{\alpha }^{r}\alpha _{tj}+W_{s}^{r}s_{t-1}+U_{m}^{r}m_{j}^{t-1}), $
where $\sigma $ is a logistic sigmoid function. The coverage vector $m_{j}^{t}$ contains the information about the historical attention focused on $w_{j}^{S}$ , and is helpful for deciding whether $w_{j}^{S}$ should be extracted or not. The alignment model is updated as follows BIBREF25 : $ e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}), $
where $V_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
Gated Dependency Attention
The semantic relationship between candidate words and the previously decoded word is valuable to guide the decoder to select the correct word. We introduce the gated dependency attention mechanism to utilize such guidance.
For a sentence $S$ , we extract the dependency tree using NLP tools such as CoreNLP BIBREF26 for English and LTP BIBREF27 for Chinese, and convert the tree into a graph by adding reversed edges with revised labels (for example, adding an edge $w_{j}^{S}\rightarrow w_{i}^{S}$ with a reversed label for each edge $w_{i}^{S}\rightarrow w_{j}^{S}$ in the dependency tree). Then for each pair of words $(w_{i}^{S},w_{j}^{S})$ , the shortest path with labels $L=[w_{1}^{L},\cdots ,w_{N_{L}}^{L}]$ in the graph is computed and mapped into a sequence of $N_{e}$ -dimensional distributed representation vectors $[l_{1},\cdots ,l_{N_{L}}]$ by the embedding operation. One could employ an RNN network to convert this sequence of vectors into a feature vector, but the RNN operation is time-consuming. We simply concatenate the vectors of short paths ( $N_{L}\le $ 3) into a $3N_{e}$ -dimensional vector and feed the vector into a two-layer feed forward neural network to generate an $N_{h}$ -dimensional feature vector $D_{ij}$ . For long paths with $N_{L}>3$ , $D_{ij}$ is set to a zero vector. We define the dependency attention vector $d_{tj}=\sum _{i=1}^{N_{S}}\hat{p}_{t-1,i}D_{ij}$ , where $\hat{p}_{t-1,i}$ is the sharpened probability $p(w_{t-1}^{F}=w_{i}^{S})$ defined in Equation ( 15 ). If $w_{t-1}^{F}$ is copied from the source sentence, $d_{tj}$ represents the semantic relationship between $w_{j}^{S}$ and $w_{t-1}^{F}$ . If $w_{t-1}^{F}$ is selected from $K$ , then $d_{tj}$ is close to zero. To correctly guide the decoder, we need to gate $d_{tj}$ to remember the previous attention vector sometimes (for example, when the separator of tuple elements is selected), and to forget it sometimes (for example, when a new fact is started). Finally, we define $g_{tj}=\mathrm {GRU}(g_{t-1,j},d_{tj})$ as the gated dependency attention vector, where $\mathrm {GRU}(\cdot ,\cdot )$ is the GRU gated function, and update the alignment model as follows: $ e_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1},g_{tj})=v_{a}^{T}\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}+D_{a}g_{tj}), $
where $D_{a}\in \mathbb {R}^{(N_{h}\times N_{h})}$ .
Post processing
For each sequence generated by Logician, we parse it into a set of facts, remove tuples with illegal format or duplicated tuples. The resultant set is taken as the output of the Logician.
Experimental Design
We first measure the utility of various components in Logician to select the optimal model, and then compare this model to the state-of-the-art methods in four types of information extraction tasks: verb/preposition-based relation, nominal attribute, descriptive phrase and hyponymy relation. The SAOKE data set is split into training set, validating set and testing set with ratios of 80%, 10%, 10%, respectively. For all algorithms involved in the experiments, the training set can be used to train the model, the validating set can be used to select an optimal model, and the testing set is used to evaluate the performance.
For each instance pair $(S,F)$ in the test set, where $S$ is the input sentence and $F$ is the formatted string of ground truth of facts, we parse $F$ into a set of tuples $\mathbb {F}=\lbrace F_{j}\rbrace _{j=1}^{M}$ . Given an open information extraction algorithm, it reads $S$ and produces a set of tuples $\mathbb {G}=\lbrace G_{i}\rbrace _{i=1}^{N}$ . To evaluate how well the $\mathbb {G}$ approximates $\mathbb {F}$ , we need to match each $G_{i}$ to a ground truth fact $F_{j}$ and check whether $G_{i}$ tells the same fact as $F_{j}$ . To conduct the match, we compute the similarity between each predicted fact in $\mathbb {G}$ and each ground truth fact in $\mathbb {F}$ , then find the optimal matching to maximize the sum of matched similarities by solving a linear assignment problem BIBREF28 . In the procedure, the similarity between two facts is defined as $ \mathrm {Sim}(G_{i},F_{j})=\frac{\sum _{l=1}^{\min (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))}\mathbf {g}(G_{i}(l),F_{j}(l))}{\max (\mathbf {n}(G_{i}),\mathbf {n}(F_{j}))}, $
where $G_{i}(l)$ and $F_{j}(l)$ denote the $l$ -th element of tuple $G_{i}$ and $F_{j}$ respectively, $\mathbf {g}(\cdot ,\cdot )$ denotes the gestalt pattern matching BIBREF29 measure for two strings and $\mathbf {n}(\cdot )$ returns the length of the tuple.
Given a matched pair of $G_{i}$ and $F_{j}$ , we propose an automatic approach to judge whether they tell the same fact. They are judged as telling the same fact if one of the following two conditions is satisfied:
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(G_{i}(l),F_{j}(l))\ge 0.85,l=1,\cdots ,\mathbf {n}(G_{i})$ ;
$\mathbf {n}(G_{i})=\mathbf {n}(F_{j})$ , and $\mathbf {g}(\mathcal {S}(G_{i}),\mathcal {S}(F_{j}))\ge 0.85$ ;
where $\mathcal {S}$ is a function formatting a fact into a string by filling the arguments into the placeholders of the predicate.
With the automatic judgment, the precision ( $P$ ), recall ( $R$ ) and $F_{1}$ -score over a test set can be computed. By defining a confidence measure and ordering the facts by their confidences, a precision-recall curve can be drawn to illustrate the overall performance of the algorithm. For Logician, the confidence of a fact is computed as the average of log probabilities over all words in that fact.
Beyond the automatic judgment, human evaluation is also employed. Given an algorithm and the corresponding fact confidence measure, we find a threshold that produces approximately 10% recall (measured by automatic judgment) on the validation set of SAOKE data set. A certain number of sentences (200 for verb/preposition based relation extraction task, and 1000 for other three tasks) are randomly chosen from the testing set of SAOKE data set, and the facts extracted from these sentences are filtered with that threshold. Then we invite three volunteers to manually refine the labeled set of facts for each sentence and vote to decide whether each filtered fact is correctly involved in the sentence. The standard precision, recall and $F_{1}$ -score are reported as the human evaluation results.
For each instance pair $(S,F)$ in the training set of SAOKE data set, we split $S$ and $F$ into words using LTP toolset BIBREF27 , and words appearing in more than 2 sentences are added to the vocabulary. By adding the OOV (out of vocabulary) tag, we finally obtain a vocabulary $V$ with size $N_{V}=65,293$ . The dimension of all embedding vectors is set to $N_{e}=200$ , and the dimension of hidden states is set to $N_{h}=256$ . We use a three-layer bi-directional GRU with dimension 128 to encode $\lbrace x_{i}\rbrace _{i=1}^{N_{S}}$ into hidden states $\lbrace h_{i}^{S}\rbrace _{i=1}^{N_{S}}$ , and a two-layer GRU with hidden-dimension 256 to encode the sequence of $\lbrace y_{j}\rbrace _{j=1}^{N_{F}}$ into hidden states $\lbrace h_{j}^{F}\rbrace _{j=1}^{N_{F}}$ . Finally, the Logician network is constructed as stated in Section "Logician" . The Logician is then trained using stochastic gradient descent (SGD) with the RMSProp BIBREF30 strategy for 20 epochs with batch size 10 on the training set of SAOKE data set. The model with the best $F_{1}$ -score by automatic judgment on the validation set is selected as the trained model. When the model is trained, given a sentence, we employ the greedy search procedure to produce the fact sequences.
Evaluating Components' Utilities
In this section, we analyze the effects of the components involved in Logician: restricted copy, coverage, and gated dependency. Since the restricted copy mechanism is the essential requirement of Logician in order to achieve the goal of being literally honest, we take the Logician with only the copy mechanism (denoted by $Copy$ ) as the baseline, and analyze the effectiveness of the coverage mechanism (denoted by $Copy+Coverage$ ), the gated dependency mechanism (denoted by $Copy+GatedDep$ ) and both (denoted by $All$ ). Furthermore, there is another option of whether or not to involve shallow semantic information such as POS-tags and NER-tags in the model. For models involving such information, the POS-tag and NER-tag of each word in sentence $S$ are annotated using LTP. For each word in $F$ that is not any keyword in $K$ , the POS-tag and NER-tag are copied from the corresponding original word in $S$ . For each keyword in $K$ , a unique POS-tag and a unique NER-tag are assigned to it. Finally, for each word in $S$ or $F$ , the POS-tag and NER-tag are mapped into distributed representation vectors and are concatenated into $x_{i}$ or $y_{j}$ to take part in the training.
All models are trained using the same settings described in the above section, and the default output facts (without any confidence filtering) are evaluated by the automatic judgment. The results are reported in Table 4 . From the results, we can see that the model involving all the components and the shallow tag information achieves the best performance. We use that model in the comparisons with existing approaches.
Comparison with Existing Approaches
In the task of extracting verb/preposition based facts, we compare our Logician with the following state-of-the-art Chinese OIE algorithms:
SRLIE: our implementation of SRLIE BIBREF15 for the Chinese language, which first uses LTP tool set to extract the semantic role labels, and converts the results into fact tuples using heuristic rules. The confidence of each fact is computed as the ratio of the number of words in the fact to the number of words in the shortest fragment of source sentence that contains all words in the fact.
ZORE : the Chinese Open Relation Extraction system BIBREF31 , which builds a set of patterns by bootstrapping based on dependency parsing results, and uses the patterns to extract relations. We used the program provided by the author of ZORE system BIBREF31 to generate the extraction results in XML format, and developed an algorithm to transform the facts into n-ary tuples, where auxiliary information extracted by ZORE is removed. The confidence measure for ZORE is the same as that for SRLIE.
SRL $_{\text{SAOKE}}$ : our implementation of the state-of-the-art SRL algorithm proposed in BIBREF32 with modifications to fit OIE tasks. $\text{SRL}_{\text{SAOKE}}$ extracts facts in two steps: (i) Predicate head word detection: detects the head word for the predicate of each possible fact, where the head word of a predicate is the last word in the predicate depending on words outside the predicate in the dependency tree. (ii) Element phrase detection: For each detected head word, detects the subject phrase, predicate phrase and object phrases by tagging the sentence with an extended BIOE tagging scheme, which tags the word neighboring the separation point of a phrase with “M” to cope with separated phrases. We modify the code provided by the author of BIBREF32 to implement the above strategy, and then train a model with the same parameter setting as in BIBREF32 on the training set of SAOKE data set. The confidence measure for $\text{SRL}_{\text{SAOKE}}$ is computed as the average of log probabilities over all tags of words in facts. Note that $\text{SRL}_{\text{SAOKE}}$ can extract both verb/preposition based relations and nominal attributes, but in this section, we only evaluate the results for the former type of facts.
The precision-recall curves of Logician and above three comparison algorithms are shown in Figure 1 , and the human evaluation results are shown in the first section of Table 5 .
The state-of-the-art nominal attribute extraction method is ReNoun BIBREF16, BIBREF17. However, it relies on a pre-constructed English attribute schema system BIBREF33 which is not available for Chinese, so it is not applicable as a baseline for Chinese. Since $\text{SRL}_{\text{SAOKE}}$ can extract nominal attributes, we compare Logician with $\text{SRL}_{\text{SAOKE}}$ on this task. The precision-recall curves of Logician and $\text{SRL}_{\text{SAOKE}}$ on the nominal attribute extraction task are shown in Figure 1, and the human evaluation results are shown in the second section of Table 5.
Descriptive phrase extraction has been considered in BIBREF18, in which domain names are required to develop patterns for extracting candidate descriptive phrases, so this method is not applicable to open-domain tasks. We develop a baseline algorithm (called Semantic Dependency Description Extractor, SDDE) to extract descriptive phrases. It extracts semantic dependency relations between words using the LTP toolset, and for each noun $w_n$ which is the parent of some semantic “Desc” relations, identifies a noun phrase $N$ with $w_n$ as its head word, assembles a descriptive phrase $D$ containing all words with a “Desc” relation to $w_n$, and finally outputs the fact “( $N$ , $DESC$ , $D$ )”. The confidence of a fact in SDDE is computed as the ratio of the number of adverbs and adjectives in $D$ to the number of words in $D$. The precision-recall curves of Logician and SDDE on the descriptive phrase extraction task are shown in Figure 1, and the human evaluation results are shown in the third section of Table 5.
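A minimal sketch of the SDDE logic is given below; it operates on pre-computed semantic dependency triples and POS tags, and the triple/tag formats (e.g. “a” for adjectives and “d” for adverbs) are assumptions standing in for the actual LTP output rather than its real API.

```python
# Hypothetical SDDE sketch: for each noun heading "Desc" relations, emit (N, DESC, D)
# with confidence = (#adverbs + #adjectives in D) / (#words in D).
def sdde_extract(tokens, pos_tags, sem_deps):
    by_head = {}
    for head, rel, dep in sem_deps:           # triples: (head index, relation, dependent index)
        by_head.setdefault(head, []).append((rel, dep))
    facts = []
    for head, children in by_head.items():
        desc_idx = sorted(i for rel, i in children if rel == "Desc")
        if not desc_idx or not pos_tags[head].startswith("n"):
            continue                          # keep only nouns that head some "Desc" relation
        noun_phrase = tokens[head]            # a fuller NP detector could be plugged in here
        desc_phrase = " ".join(tokens[i] for i in desc_idx)
        adv_adj = sum(pos_tags[i] in ("a", "d") for i in desc_idx)
        confidence = adv_adj / len(desc_idx)
        facts.append(((noun_phrase, "DESC", desc_phrase), confidence))
    return facts
```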
HypeNet BIBREF19 is the state-of-the-art algorithm recommended for hyponymy extraction BIBREF34, which judges whether a hyponymy relation exists between two given words. To make it capable of judging the hyponymy relation between two phrases, we replace the word embedding vector component in HypeNet by an LSTM network. Two modified HypeNet models are built using different training data sets: (i) $\text{HypeNet}_{\text{Phrase}}$: using the pairs of phrases with the ISA relation in the training set of the SAOKE data set (9,407 pairs after the compact expression expansion); (ii) $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$: besides the training set for $\text{HypeNet}_{\text{Phrase}}$, adding two Chinese hyponymy data sets (1.4 million pairs of words in hyponymy relation in total): Tongyici Cilin (Extended) (CilinE for short) BIBREF27 and cleaned Wikipedia Category data BIBREF35. In both cases, the sentences from both Chinese Wikipedia pages and the training set of the SAOKE data set are taken as the background corpus for the HypeNet algorithm. In the testing phase, the trained models are used to predict whether the hyponymy relation exists for each pair of noun phrases/words in sentences of the testing set of the SAOKE data set. The confidence of a judgment is the predicted probability of the existence of the hyponymy relation. The precision-recall curves of Logician, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ are shown in Figure 1, and the human evaluation results in the fourth section of Table 5.
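The modification to HypeNet can be sketched as a small phrase encoder that replaces the single word-embedding lookup; the dimensions below are assumptions, not the settings used in the experiments.

```python
# A hedged PyTorch sketch: encode a multi-word phrase with an LSTM and use the final
# hidden state where HypeNet originally used a single word embedding.
import torch
import torch.nn as nn

class PhraseEncoder(nn.Module):
    def __init__(self, vocab_size, d_emb=100, d_hid=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.lstm = nn.LSTM(d_emb, d_hid, batch_first=True)

    def forward(self, phrase_ids):              # phrase_ids: (batch, length) word indices
        states, _ = self.lstm(self.emb(phrase_ids))
        return states[:, -1]                    # final state as the phrase representation
```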
Results Analysis
The experimental results reveal that Logician outperforms the comparison methods by a large margin in the first three tasks. For the hyponymy detection task, Logician clearly outperforms $\text{HypeNet}_{\text{Phrase}}$ when using the same training data, and produces results comparable to $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ with much less training data. Table 6 exhibits several example sentences and the facts extracted by these algorithms.
The poor performance of the pattern-based methods is plausibly due to the noise in the SAOKE data set. The sentences in the SAOKE data set are randomly selected from a web encyclopedia and written in a free and casual style, and are thus noisier than the training data of the NLP toolset used by these methods. In this situation, the NLP toolset may produce poor results, and so do the pattern-based methods.
Models learned from the SAOKE data set achieve much better performance. Nevertheless, $\text{SRL}_{\text{SAOKE}}$ extracts each fact without knowing whether a candidate word has been used in other facts, which results in the misleading overlap of the word “学” (“learn” in English) between two facts in the first case of Table 6. Similarly, $\text{HypeNet}_{\text{Phrase}}$ and $\text{HypeNet}_{\text{Phrase}}^{\text{Extra}}$ focus on the semantic vectors of pairs of phrases and their dependency paths in the background corpus. They extract each fact independently of other facts and hence do not know whether any other relations have been extracted about these two phrases. In other words, for those comparison methods, an important source of information is neglected and a global optimization over all facts involved in a sentence is absent.
On the contrary, Logician performs global optimization over the facts involved in each sentence by the sequence-to-sequence learning paradigm with the help of the coverage mechanism, in which facts compete with each other for the attention of words, but also cooperate to share words. Valuable information is shared between these multiple tasks, which makes Logician consistently superior to the other algorithms in these tasks.
Furthermore, the $\text{SRL}_{\text{SAOKE}}$ and $\text{HypeNet}$ methods suffer from the OOV problem when encountering unfamiliar words/phrases, such as the person name and school name in the last case of Table 6. In this situation they may fail to produce a reasonable result. Logician is able to cope with unfamiliar words/phrases by exploiting context information using a deep RNN network with the help of the copy mechanism.
Extraction Error Analysis of Logician
We perform a preliminary analysis of the results produced by the Logician model. The most notable problem is that it is unable to recall some facts for long or complex sentences. The last case in Table 6 exhibits such a situation, where the fact (蔡竞,ISA,经济学博士) ((Cai Jing, ISA, Ph.D. in economics) in English) is not recalled. This phenomenon indicates that the coverage mechanism may lose effectiveness in this situation. The second class of error is incomplete extraction, as exhibited in the third case in Table 6. Due to the incomplete extraction, the leftover parts may interfere with the generation of other facts and result in nonsensical outputs, which is the third class of error. We believe it is helpful to introduce extra rewards into the learning procedure of Logician to overcome these problems. For example, the reward could be the amount of information remaining after the fact extraction, or the completeness of the extracted facts. Developing such rewards and reinforcement learning algorithms that use them to refine Logician is left to future work.
Knowledge Expressions
Tuple is the most common knowledge expression format for OIE systems to express n-ary relations between a subject and objects. Beyond such information, ClausIE BIBREF36 extracts extra information in the tuples: a complement, and one or more adverbials, and OLLIE BIBREF6 extracts additional context information. SAOKE is able to express n-ary relations, and can be easily extended to support the knowledge extracted by ClausIE, but needs to be redesigned to support context information, which we leave to future work.
However, there is a fundamental difference between SAOKE and the tuples in traditional OIE systems. In traditional OIE systems, the knowledge expression is generally not directly related to the extraction algorithm. It is a tool to reorganize the extracted knowledge into a form that is easy to read, store, and compute on. However, SAOKE is proposed to act as the direct learning target of the end-to-end Logician model. In such an end-to-end framework, the knowledge representation is the core of the system, which decides what information will be extracted and how complex the learning algorithm will be. To our knowledge, SAOKE is the first attempt to design a knowledge expression friendly to end-to-end learning algorithms for OIE tasks. Efforts are still needed to make SAOKE more powerful in order to express more complex knowledge such as events.
Relation Extraction
Relation extraction is the task of identifying semantic connections between entities. Major existing relation extraction algorithms can be classified into two classes: closed-domain and open-domain. Closed-domain algorithms are trained to identify a fixed and finite set of relations, using supervised methods BIBREF37, BIBREF38, BIBREF39, BIBREF40 or weakly supervised methods BIBREF1, BIBREF41, while open-domain algorithms, represented by the aforementioned OIE systems, discover open-domain relations without a predefined schema. Beyond these two classes, methods like universal schema BIBREF42 are able to learn from both data with a fixed and finite set of relations, such as relations in Freebase, and data with open-domain surface relations produced by heuristic patterns or OIE systems.
Logician can be used as an OIE system to extract open-domain relations between entities, and can act as a sub-system for knowledge base construction/completion with the help of schema mapping BIBREF43. Compared with existing OIE systems, which are pattern-based or self-supervised by labeling samples using patterns BIBREF13, to our knowledge Logician is the first model trained in a supervised end-to-end manner for the OIE task, and it has exhibited strong performance in our experiments. There are some neural end-to-end systems BIBREF39, BIBREF40, BIBREF41 proposed for relation extraction, but they all aim to solve the closed-domain problem.
However, Logician is not limited to the relation extraction task. First, Logician extracts more information beyond relations. Second, Logician focuses on examining how natural language expresses facts BIBREF5, and on producing helpful intermediate structures for high-level tasks.
Language to Logic
Efforts have been made to map natural language sentences into logical forms. Some approaches such as BIBREF44, BIBREF45, BIBREF46, BIBREF47 learn the mapping under the supervision of manually labeled logical forms, while others BIBREF48, BIBREF49 are indirectly supervised by distant information, system rewards, etc. However, all previous works rely on a pre-defined, domain-specific logical system, which limits their ability to learn facts outside the pre-defined logical system.
Logician can be viewed as a system that maps language to natural logic, in which the majority of the information is expressed by natural phrases. Unlike the systems mentioned above, which aim at execution using the logical form, Logician focuses on understanding how facts and logic are expressed in natural language. Further mappings to domain-specific logical systems, or even executors, can be built on the basis of Logician's output, and we believe that, with the help of Logician, this work would be easier and the overall performance of the system may be improved.
Facts to Language
The problem of generating sentences from a set of facts has attracted a lot of attention BIBREF50, BIBREF51, BIBREF52, BIBREF53. These models focus on facts with a predefined schema from a specific problem domain, such as people biographies and basketball game records, and so cannot work in the open domain. The SAOKE data set provides an opportunity to extend these models to the open domain.
Duality between Knowledge and Language
As mentioned in the sections above, the SAOKE data set provides examples of a dual mapping between facts and sentences. Duality has been shown to be useful for improving the performance of agents in many NLP tasks, such as back-and-forth translation BIBREF54 and question answering BIBREF55. Using the duality between knowledge and language to improve the performance of Logician is therefore a promising direction.
Conclusion
In this paper, we consider the open information extraction (OIE) problem for a variety of types of facts in a unified view. Our solution consists of three components: the SAOKE format, the SAOKE data set, and Logician. The SAOKE format is designed to express different types of facts in a unified manner. We publicly release the largest manually labeled data set for OIE tasks in SAOKE form. Using the labeled SAOKE data set, we train an end-to-end neural sequence-to-sequence model, called Logician, to transform sentences in natural language into facts. The experiments reveal the superiority of Logician over the state-of-the-art algorithms in various open-domain information extraction tasks.
Regarding future work, there are at least three promising directions. Firstly, one can investigate knowledge expression methods to extend SAOKE to express more complex knowledge, for tasks such as event extraction. Secondly, one can develop novel learning strategies to improve the performance of Logician and adapt the algorithm to the extended future version of SAOKE. Thirdly, one can extend the SAOKE format and the Logician algorithm to other languages. | 3,200 sentences
8816333fbed2bfb1838407df9d6c084ead89751c | 8816333fbed2bfb1838407df9d6c084ead89751c_0 | Q: How is data for RTFM collected?
Text: Introduction
Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.
Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.
Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever-changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produce a combinatorially large number of environment dynamics to train and evaluate .
Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.
Related Work ::: Language-conditioned policy learning.
A growing body of research is learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.
Related Work ::: Language grounding.
Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but also new environment dynamics.
We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.
To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.
In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.
During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).
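A rough sketch of how one episode's dynamics could be sampled is shown below; it is illustrative only (the released generator may differ), and all vocabulary lists and names are placeholders.

```python
# Hypothetical episode sampler: assign monsters to teams and modifiers to elements,
# then pick a target monster/item and a distractor from a different team.
import random

MONSTERS = ["wolf", "goblin", "jaguar", "lynx", "bat"]
TEAMS = ["Order of the Forest", "Rebel Enclave"]
ELEMENTS = ["fire", "poison", "cold", "lightning"]
MODIFIERS = ["fanatical", "arcane", "blessed", "shimmering"]
ITEMS = ["sword", "hammer", "axe"]

def sample_episode(rng=random):
    team_of = {m: rng.choice(TEAMS) for m in MONSTERS}        # monster -> team
    beats = {e: rng.choice(MODIFIERS) for e in ELEMENTS}      # element -> effective modifier
    goal_team = rng.choice(TEAMS)
    same_team = [m for m in MONSTERS if team_of[m] == goal_team]
    other_team = [m for m in MONSTERS if team_of[m] != goal_team]
    target_monster = rng.choice(same_team) if same_team else rng.choice(MONSTERS)
    target_element = rng.choice(ELEMENTS)
    target_item = (beats[target_element], rng.choice(ITEMS))  # e.g. ("fanatical", "sword")
    distractor = rng.choice(other_team) if other_team else None
    return {"team_of": team_of, "beats": beats, "goal_team": goal_team,
            "target": (target_element, target_monster),
            "target_item": target_item, "distractor": distractor}
```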
In order to win the game (e.g. Figure FIGREF3), the agent must
identify the target team from the goal (e.g. Order of the Forest)
identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)
identify which monster is in the world (e.g. goblin), and its element (e.g. fire)
identify the modifiers that are effective against this element (e.g. fanatical, shimmering)
find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)
pick up the correct item (e.g. fanatical sword)
engage the correct monster in combat (e.g. fire goblin).
If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.
presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.
We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of .
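The rock-paper-scissors-like dependency graph for this fixed-goal variant can be sampled with a few lines; the snippet below is a toy illustration, not the actual environment code.

```python
# Sample three characters and wire a cyclic "beats" relation among them.
import random

def sample_rps_graph(alphabet="abcdefghijklmnopqrstuvwxyz", rng=random):
    a, b, c = rng.sample(alphabet, 3)
    return {a: b, b: c, c: a}   # key beats value, e.g. {"a": "b", "b": "c", "c": "a"}
```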
Model
We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model.
Model ::: () layer
Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer.
We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:
Unlike FiLM, we additionally modulate text features using visual features:
The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions.
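A minimal PyTorch sketch of this two-way modulation is given below. It only captures the idea (visual features modulated by text, text modulated by a pooled visual summary, and the sum plus a max-pooled summary as outputs); the exact parameterisation, convolution stack, and dimensions in the paper may differ.

```python
import torch
import torch.nn as nn

class BidirectionalFiLMLayer(nn.Module):
    def __init__(self, d_text, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.gamma_v = nn.Linear(d_text, c_out)   # text -> scale for visual features
        self.beta_v = nn.Linear(d_text, c_out)    # text -> shift for visual features
        self.gamma_t = nn.Linear(c_in, d_text)    # visual summary -> scale for text features
        self.beta_t = nn.Linear(c_in, d_text)     # visual summary -> shift for text features
        self.text_to_map = nn.Linear(d_text, c_out)

    def forward(self, vis, text):
        # vis: (B, c_in, H, W), text: (B, d_text)
        v = self.conv(vis)
        g_v = self.gamma_v(text)[:, :, None, None]
        b_v = self.beta_v(text)[:, :, None, None]
        vis_mod = g_v * v + b_v                            # text-modulated visual features

        vis_summary = vis.mean(dim=(2, 3))                 # (B, c_in) pooled visual context
        text_mod = self.gamma_t(vis_summary) * text + self.beta_t(vis_summary)
        t_map = self.text_to_map(text_mod)[:, :, None, None]

        fused = torch.relu(vis_mod + t_map)                # sum of the modulated features
        summary = fused.amax(dim=(2, 3))                   # max-pooled summary over space
        return fused, summary
```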
Model ::: The model
We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model.
Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20.
We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21.
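The text encoders and the conditional attention can be sketched as follows; this is a hedged illustration with assumed dimensions, not the released implementation.

```python
import torch
import torch.nn as nn

class SelfAttnSummary(nn.Module):
    """Bidirectional LSTM followed by self-attention to get a fixed-length summary."""
    def __init__(self, d_emb, d_hid):
        super().__init__()
        self.lstm = nn.LSTM(d_emb, d_hid, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * d_hid, 1)

    def forward(self, x):                          # x: (B, T, d_emb)
        h, _ = self.lstm(x)                        # (B, T, 2*d_hid)
        a = torch.softmax(self.score(h), dim=1)    # (B, T, 1) self-attention weights
        return (a * h).sum(dim=1), h               # summary and per-token states

def conditional_attention(doc_states, query):
    """Dot-product attention over document states conditioned on a query summary."""
    # doc_states: (B, T, d), query: (B, d)
    scores = torch.bmm(doc_states, query.unsqueeze(-1)).squeeze(-1)   # (B, T)
    a = torch.softmax(scores, dim=-1).unsqueeze(-1)
    return (a * doc_states).sum(dim=1)                                # (B, d)
```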
We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have
$_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as
where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details.
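The output heads can be sketched as two small MLPs over the final visual summary; the hidden size and the tanh activation here are assumptions.

```python
import torch
import torch.nn as nn

class PolicyAndBaselineHeads(nn.Module):
    def __init__(self, d_summary, n_actions, d_hidden=64):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(d_summary, d_hidden), nn.Tanh(),
                                    nn.Linear(d_hidden, n_actions))
        self.baseline = nn.Sequential(nn.Linear(d_summary, d_hidden), nn.Tanh(),
                                      nn.Linear(d_hidden, 1))

    def forward(self, summary):                       # summary: (B, d_summary)
        logits = self.policy(summary)                 # softmax over these gives the policy
        value = self.baseline(summary).squeeze(-1)    # scalar baseline for advantage estimation
        return logits, value
```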
Experiments
We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information.
We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for details on our model and baselines, and appendix SECREF10 for training details.
Experiments ::: Comparison to baselines and ablations
We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details).
Experiments ::: Curriculum learning for complex environments
Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language-grounded policy learners.
Experiments ::: Curriculum learning for complex environments ::: Attention maps.
Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggests that attention mechanisms in help identify relevant information in the document.
Experiments ::: Curriculum learning for complex environments ::: Analysis of trajectories and failure modes.
We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, it occasionally gets stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.
Conclusion
We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25.
Playthrough examples
These figures shows key snapshots from a trained policy on randomly sampled environments.
Variable dimensions
Let $_\in {_}$ denote a fixed-length $_$-dimensional representation of the text and $_\in {_\times H \times W}$ denote the representation of visual inputs with height $H$, width $W$, and the corresponding number of channels.
Model details ::: ::: Hyperparameters.
The used in our experiments consists of 5 consecutive layers, each with 3x3 convolutions and padding and stride sizes of 1. The layers have channels of 16, 32, 64, 64, and 64, with residual connections from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure FIGREF18) shares weights with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30.
Model details ::: CNN with residual connections
Like , the CNN baseline consists of 5 layers of convolutions with channels of 16, 32, 64, 64, and 64. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.
The input to the network is the concatenation of the observations $^{(0)}$ and text representations. The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure FIGREF46 illustrates the CNN baseline.
Model details ::: FiLM baseline
The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from BIBREF6. Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states.
Training procedure
We train using an implementation of IMPALA BIBREF22. In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp BIBREF26 with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set $\alpha = 0.99$ and $\epsilon = 0.01$.
During training, we apply a small negative reward of $-0.02$ for each time step and a discount factor of 0.99 to facilitate convergence. We additionally include an entropy cost to encourage exploration. Let $$ denote the policy. The entropy loss is calculated as
In addition to policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages BIBREF22.
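A sketch of how these auxiliary terms could be combined is shown below; the policy-gradient term itself (with V-trace corrections) is assumed to be computed elsewhere, and this is an illustration rather than the training code.

```python
import torch

def auxiliary_losses(logits, advantages, entropy_weight=0.005, baseline_weight=0.5):
    # logits: (B, n_actions) unnormalised action scores; advantages: (B,) from V-trace
    log_p = torch.log_softmax(logits, dim=-1)
    p = log_p.exp()
    entropy_loss = (p * log_p).sum(dim=-1).mean()      # negative entropy; minimising it
                                                       # encourages exploration
    baseline_loss = advantages.pow(2).mean().sqrt()    # root mean square of the advantages
    return entropy_weight * entropy_loss + baseline_weight * baseline_loss
```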
When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation.
::: Reading models generalise to new environments.
We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the “permutations” setting in Table TABREF50.
We train models on the $10\times 10$ worlds from the training set and evaluate them on both seen and not seen during training. The left of Figure FIGREF51 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For $9\times 9$ and $11\times 11$ worlds, the world configuration was not seen during training. For $10\times 10$ worlds, there is a 5% chance that the initial frame was seen during training. Figure FIGREF51 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right).
::: Reading models generalise to new concepts.
In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table TABREF50 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure FIGREF52 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes.
::: is more sample-efficient and learns better policies.
In Figure FIGREF51, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. further outperforms FiLM, and does so more consistently in that the final performance has less variance. This behaviour is also observed in Figure FIGREF52. When training on the held-out set without transferring, is more sample efficient than FiLM and the CNN model, and achieves a higher win-rate. When transferring to the held-out set, remains more sample efficient than the other models.
Language templates
We collect human-written natural language templates for the goal and the dynamics. The goal statements in describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to which team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collect 10 language templates for each type of statement. The entire document is composed of these statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and the modifiers and element for the second type. | Unanswerable
37e8f5851133a748c4e3e0beeef0d83883117a98 | 37e8f5851133a748c4e3e0beeef0d83883117a98_0 | Q: How better is performance of proposed model compared to baselines?
Text: Introduction
Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.
Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.
Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever-changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produce a combinatorially large number of environment dynamics to train and evaluate .
Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.
Related Work ::: Language-conditioned policy learning.
A growing body of research is learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.
Related Work ::: Language grounding.
Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but also new environment dynamics.
We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.
To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.
In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.
During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).
In order to win the game (e.g. Figure FIGREF3), the agent must
identify the target team from the goal (e.g. Order of the Forest)
identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)
identify which monster is in the world (e.g. goblin), and its element (e.g. fire)
identify the modifiers that are effective against this element (e.g. fanatical, shimmering)
find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)
pick up the correct item (e.g. fanatical sword)
engage the correct monster in combat (e.g. fire goblin).
If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.
presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.
We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of .
Model
We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model.
Model ::: () layer
Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer.
We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:
Unlike FiLM, we additionally modulate text features using visual features:
The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions.
Model ::: The model
We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model.
Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20.
We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21.
We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have
$_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as
where $\text{MLP}_{\rm policy}$ and $\text{MLP}_{\rm baseline}$ are 2-layer multi-layer perceptrons with a nonlinear activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details.
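As an illustration of the text-processing steps described above (BiLSTM encoding, self-attention summaries, and dot-product attention conditioned on a query), here is a small hedged sketch in PyTorch. Module names, dimensions, and the exact attention scoring are assumptions rather than the paper's implementation.

# Minimal sketch (not the authors' code) of a self-attention summary and conditional attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttnSummary(nn.Module):
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, tokens):                   # tokens: (B, T, emb_dim)
        h, _ = self.lstm(tokens)                 # (B, T, 2*hidden_dim)
        a = F.softmax(self.score(h), dim=1)      # (B, T, 1) attention weights
        return (a * h).sum(dim=1), h             # fixed-length summary and per-token states

def attend(doc_states, query):
    # dot-product attention; assumes query dimension equals the document state dimension
    scores = torch.bmm(doc_states, query.unsqueeze(-1)).squeeze(-1)   # (B, T)
    weights = F.softmax(scores, dim=-1)
    return torch.bmm(weights.unsqueeze(1), doc_states).squeeze(1)     # (B, d)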
Experiments
We consider variants of the task created by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3, as there is no need to disambiguate among many assignees, making it easier to identify relevant information.
We compare our model to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three ablated variants of our model. In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for details on our model and baselines, and appendix SECREF10 for training details.
Experiments ::: Comparison to baselines and ablations
We compare our model to baselines and ablated variants on a simplified variant of the task in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that, compared to baselines and ablated variants, our model is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of the ablated features that enables the full model to win consistently. Qualitatively, the ablated variants converge to locally optimal policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with our model outperforming FiLM and the CNN model. We find similar results for our model, its ablated variants, and the baselines on other tasks (see appendix SECREF11 for details).
Experiments ::: Curriculum learning for complex environments
Due to the long sequence of co-references the agent must perform in order to solve the full task ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of the task. We start with the simplest variant (no group, no dyna, no nl) and then add an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments, and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on the full task, and that initial policy training (first row of Table TABREF32) with additional complexity in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full task and in which the model was trained on $10\times 10$ versions of the full task. We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve the full task. This highlights the difficulty of the problem and suggests that there is significant room for improvement in developing better language-grounded policy learners.
Experiments ::: Curriculum learning for complex environments ::: Attention maps.
Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate-layer attention focuses on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that the attention mechanisms in our model help identify relevant information in the document.
Experiments ::: Curriculum learning for complex environments ::: Analysis of trajectories and failure modes.
We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full task. We find that well-performing policies exhibit a number of consistent behaviours, such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, they occasionally get stuck evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.
Conclusion
We proposed a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study this problem, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We also proposed a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. Our model outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, it performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and to induce hierarchical policies BIBREF24, BIBREF25.
Playthrough examples
These figures show key snapshots from a trained policy on randomly sampled environments.
Variable dimensions
Let $\mathbf{x}_\text{text} \in \mathbb{R}^{d_\text{text}}$ denote a fixed-length $d_\text{text}$-dimensional representation of the text and $\mathbf{X}_\text{vis} \in \mathbb{R}^{d_\text{vis} \times H \times W}$ denote the representation of visual inputs with height $H$, width $W$, and $d_\text{vis}$ channels.
Model details ::: ::: Hyperparameters.
The model used in our experiments consists of 5 consecutive layers, each with 3x3 convolutions and padding and stride sizes of 1. The layers have 16, 32, 64, 64, and 64 channels, with a residual connection from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure FIGREF18) shares weights with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30.
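For concreteness, a sketch of a convolutional stack matching these stated hyperparameters (3x3 kernels, stride and padding of 1, channels 16, 32, 64, 64, 64, and a residual connection from the 3rd layer to the 5th layer) might look as follows. The ReLU activations and the exact placement of the residual addition are assumptions.

# Illustrative reading of the stated hyperparameters, not the released implementation.
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=1, padding=1), nn.ReLU())

class ConvStack(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        chans = [16, 32, 64, 64, 64]
        self.layers = nn.ModuleList()
        prev = in_channels
        for c in chans:
            self.layers.append(conv_block(prev, c))
            prev = c

    def forward(self, x):
        outs = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            outs.append(x)
            if i == 4:               # residual connection: 3rd layer output added to 5th layer output
                x = x + outs[2]
        return x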
Model details ::: CNN with residual connections
Like our model, the CNN baseline consists of 5 layers of convolutions with 16, 32, 64, 64, and 64 channels. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.
The input to the network is the concatenation of the initial visual features and the text representations. The text representations consist of self-attention over the bidirectional-LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure FIGREF46 illustrates the CNN baseline.
Model details ::: FiLM baseline
The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from BIBREF6. Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states.
Training procedure
We train using an implementation of IMPALA BIBREF22. In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp BIBREF26 with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set $\alpha = 0.99$ and $\epsilon = 0.01$.
During training, we apply a small negative reward of $-0.02$ for each time step and a discount factor of 0.99 to facilitate convergence. We additionally include an entropy cost to encourage exploration. Let $\pi$ denote the policy. The entropy loss is calculated as
In addition to policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages BIBREF22.
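A hedged sketch of how these loss terms could be combined is shown below. The policy-gradient term and the IMPALA/V-trace machinery are treated as given; only the entropy and baseline terms and the stated weights (0.005 and 0.5) follow the text, and the function and argument names are assumptions.

# Sketch of the combined objective described above (names and shapes are assumptions).
import torch

def total_loss(pg_loss, action_log_probs, advantages,
               entropy_weight=0.005, baseline_weight=0.5):
    # entropy of the policy, averaged over the batch (higher entropy encourages exploration)
    probs = action_log_probs.exp()
    entropy = -(probs * action_log_probs).sum(dim=-1).mean()
    # baseline loss: root mean square of the advantages
    baseline_loss = torch.sqrt((advantages ** 2).mean())
    # subtracting weighted entropy is equivalent to adding an entropy loss term
    return pg_loss - entropy_weight * entropy + baseline_weight * baseline_loss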
When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation.
::: Reading models generalise to new environments.
We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the “permutations” setting in Table TABREF50.
We train models on the $10\times 10$ worlds from the training set and evaluate them on environments both seen and not seen during training. The left of Figure FIGREF51 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For $9\times 9$ and $11\times 11$ worlds, the world configuration was not seen during training. For $10\times 10$ worlds, there is a 5% chance that the initial frame was seen during training. The right of Figure FIGREF51 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right).
::: Reading models generalise to new concepts.
In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table TABREF50 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure FIGREF52 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes.
::: is more sample-efficient and learns better policies.
In Figure FIGREF51, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. Our model further outperforms FiLM, and does so more consistently in that the final performance has less variance. This behaviour is also observed in the results in Figure FIGREF52. When training on the held-out set without transferring, our model is more sample efficient than FiLM and the CNN model, and achieves a higher win rate. When transferring to the held-out set, our model remains more sample efficient than the other models.
Language templates
We collect human-written natural language templates for the goal and the dynamics. The goal statements describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to which team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collect 10 language templates for each type of statement. The entire document is composed from these statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and with the modifiers and element for the second type. | The proposed model achieves a 66±22 win rate, the baseline CNN 13±1, and the baseline FiLM 32±3. |
c9e9c5f443649593632656a5934026ad8ccc1712 | c9e9c5f443649593632656a5934026ad8ccc1712_0 | Q: How does the proposed model capture three-way interactions?
Text: Introduction
Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.
Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.
Our contributions are two-fold. First, we propose a new grounded policy learning problem. In this problem, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever-changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produce a combinatorially large number of environment dynamics to train and evaluate on.
Second, we propose a model for the joint reasoning problem. We show that it generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win rate. Through curriculum learning, where we adapt models trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that the model attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of the problem in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.
Related Work ::: Language-conditioned policy learning.
A growing body of research studies learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.
Related Work ::: Language grounding.
Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation not only to new goal descriptions but also to new environment dynamics.
We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.
To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.
In this task, the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell as a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops the existing weapon if it picks up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.
During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).
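The following sketch illustrates the kind of per-episode sampling described above: a target team, monster, element, and effective modifier are drawn, along with a distractor monster and item. The entity lists and helper names are invented for illustration and are not the benchmark's code.

# Hypothetical sketch of per-episode dynamics sampling (names are illustrative only).
import random

def sample_episode(teams, monsters_by_team, elements, modifiers_by_element, items):
    target_team = random.choice(teams)
    target_monster = random.choice(monsters_by_team[target_team])
    target_element = random.choice(elements)
    good_modifier = random.choice(modifiers_by_element[target_element])
    target_item = (good_modifier, random.choice(items))   # e.g. "fanatical sword"

    distractor_team = random.choice([t for t in teams if t != target_team])
    distractor_monster = random.choice(monsters_by_team[distractor_team])
    distractor_element = random.choice([e for e in elements if e != target_element])
    distractor_item = (random.choice(modifiers_by_element[distractor_element]),
                       random.choice(items))
    return {
        "goal_team": target_team,
        "target": (target_element, target_monster),
        "distractor": (distractor_element, distractor_monster),
        "items": [target_item, distractor_item],
    }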
In order to win the game (e.g. Figure FIGREF3), the agent must
identify the target team from the goal (e.g. Order of the Forest)
identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)
identify which monster is in the world (e.g. goblin), and its element (e.g. fire)
identify the modifiers that are effective against this element (e.g. fanatical, shimmering)
find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)
pick up the correct item (e.g. fanatical sword)
engage the correct monster in combat (e.g. fire goblin).
If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.
The task presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and a document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised: the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.
We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
In addition to the main tasks, we also study a simpler formulation that has a fixed goal. In this formulation, the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of this task.
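A minimal sketch of sampling such a rock-paper-scissors-like dependency graph over three characters is given below; the function and variable names are assumptions for illustration only.

# Illustrative sampling of a cyclic "beats" relation over three characters.
import random
import string

def sample_beats_graph(alphabet=string.ascii_lowercase, k=3):
    chars = random.sample(alphabet, k)
    # cyclic relation: chars[0] beats chars[1], chars[1] beats chars[2], chars[2] beats chars[0]
    beats = {chars[i]: chars[(i + 1) % k] for i in range(k)}
    monster_type = random.choice(chars)
    winning_item = [c for c, target in beats.items() if target == monster_type][0]
    return beats, monster_type, winning_item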
Model
We propose a model which builds representations that capture three-way interactions between the goal, the document describing environment dynamics, and environment observations. We begin with the definition of the () layer, which forms the core of our model.
Model ::: () layer
Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In our setting, the agent must not only filter concepts in the visual domain using language but also filter concepts in the text domain using visual observations. To support this, our layer builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer.
We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $\mathbf{x}_\text{text}$ denote a fixed-length $d_\text{text}$-dimensional representation of the text and $\mathbf{X}_\text{vis}$ the representation of visual inputs with height $H$, width $W$, and $d_\text{vis}$ channels. Let $\text{Conv}$ denote a convolution layer. Let + and * denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:
Unlike FiLM, we additionally modulate text features using visual features:
The output of the layer consists of the sum of the modulated features, as well as a max-pooled summary of this sum across spatial dimensions.
Model ::: The model
We model interactions between observations from the environment, the goal, and the document using the layers introduced above. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In the case of a textual environment, we consider the grid of word embeddings as the visual features for these layers. The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model.
Let $\mathbf{E}_\text{obs}$ denote word embeddings corresponding to the observations from the environment, where $\mathbf{E}_\text{obs}[:, :, i, j]$ represents the embeddings corresponding to the $\ell_\text{obs}$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $\mathbf{E}_\text{doc}$, $\mathbf{E}_\text{inv}$, and $\mathbf{E}_\text{goal}$ respectively denote the embeddings corresponding to the $\ell_\text{doc}$-word document, the $\ell_\text{inv}$-word inventory, and the $\ell_\text{goal}$-word goal. We first compute a fixed-length summary $\mathbf{c}_\text{goal}$ of the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20.
We abbreviate this self-attention summary of the goal as $\mathbf{c}_\text{goal} = \text{selfattn}(\text{BiLSTM}(\mathbf{E}_\text{goal}))$. We similarly compute a summary of the inventory as $\mathbf{c}_\text{inv} = \text{selfattn}(\text{BiLSTM}(\mathbf{E}_\text{inv}))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21.
We abbreviate this attention over the document encoding conditioned on the goal summary as $\mathbf{c}_\text{goal-doc} = \text{attend}(\text{BiLSTM}(\mathbf{E}_\text{doc}), \mathbf{c}_\text{goal})$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let $[a; b]$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have
The vis-doc encoding $\text{BiLSTM}_\text{vis-doc}(\mathbf{E}_\text{doc})$ is another encoding of the document, similar to $\text{BiLSTM}(\mathbf{E}_\text{doc})$ above but produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features. We max pool a linear transform of the initial visual features to compute the initial visual summary. Let $\mathbf{c}$ denote the visual summary of the last layer. We compute the policy $\pi$ and baseline $v$ as
where $\text{MLP}_{\rm policy}$ and $\text{MLP}_{\rm baseline}$ are 2-layer multi-layer perceptrons with a nonlinear activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details.
Experiments
We consider variants of the task created by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3, as there is no need to disambiguate among many assignees, making it easier to identify relevant information.
We compare our model to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three ablated variants of our model. In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for details on our model and baselines, and appendix SECREF10 for training details.
Experiments ::: Comparison to baselines and ablations
We compare our model to baselines and ablated variants on a simplified variant of the task in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that, compared to baselines and ablated variants, our model is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of the ablated features that enables the full model to win consistently. Qualitatively, the ablated variants converge to locally optimal policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with our model outperforming FiLM and the CNN model. We find similar results for our model, its ablated variants, and the baselines on other tasks (see appendix SECREF11 for details).
Experiments ::: Curriculum learning for complex environments
Due to the long sequence of co-references the agent must perform in order to solve the full task ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of the task. We start with the simplest variant (no group, no dyna, no nl) and then add an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments, and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on the full task, and that initial policy training (first row of Table TABREF32) with additional complexity in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full task and in which the model was trained on $10\times 10$ versions of the full task. We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve the full task. This highlights the difficulty of the problem and suggests that there is significant room for improvement in developing better language-grounded policy learners.
Experiments ::: Curriculum learning for complex environments ::: Attention maps.
Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate-layer attention focuses on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that the attention mechanisms in our model help identify relevant information in the document.
Experiments ::: Curriculum learning for complex environments ::: Analysis of trajectories and failure modes.
We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full task. We find that well-performing policies exhibit a number of consistent behaviours, such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, they occasionally get stuck evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.
Conclusion
We proposed a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study this problem, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We also proposed a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. Our model outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, it performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and to induce hierarchical policies BIBREF24, BIBREF25.
Playthrough examples
These figures show key snapshots from a trained policy on randomly sampled environments.
Variable dimensions
Let $\mathbf{x}_\text{text} \in \mathbb{R}^{d_\text{text}}$ denote a fixed-length $d_\text{text}$-dimensional representation of the text and $\mathbf{X}_\text{vis} \in \mathbb{R}^{d_\text{vis} \times H \times W}$ denote the representation of visual inputs with height $H$, width $W$, and $d_\text{vis}$ channels.
Model details ::: ::: Hyperparameters.
The model used in our experiments consists of 5 consecutive layers, each with 3x3 convolutions and padding and stride sizes of 1. The layers have 16, 32, 64, 64, and 64 channels, with a residual connection from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure FIGREF18) shares weights with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30.
Model details ::: CNN with residual connections
Like our model, the CNN baseline consists of 5 layers of convolutions with 16, 32, 64, 64, and 64 channels. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.
The input to the network is the concatenation of the initial visual features and the text representations. The text representations consist of self-attention over the bidirectional-LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure FIGREF46 illustrates the CNN baseline.
Model details ::: FiLM baseline
The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from BIBREF6. Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states.
Training procedure
We train using an implementation of IMPALA BIBREF22. In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp BIBREF26 with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set $\alpha = 0.99$ and $\epsilon = 0.01$.
During training, we apply a small negative reward of $-0.02$ for each time step and a discount factor of 0.99 to facilitate convergence. We additionally include an entropy cost to encourage exploration. Let $\pi$ denote the policy. The entropy loss is calculated as
In addition to policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages BIBREF22.
When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation.
::: Reading models generalise to new environments.
We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the “permutations” setting in Table TABREF50.
We train models on the $10\times 10$ worlds from the training set and evaluate them on environments both seen and not seen during training. The left of Figure FIGREF51 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For $9\times 9$ and $11\times 11$ worlds, the world configuration was not seen during training. For $10\times 10$ worlds, there is a 5% chance that the initial frame was seen during training. The right of Figure FIGREF51 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right).
::: Reading models generalise to new concepts.
In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table TABREF50 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure FIGREF52 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes.
::: is more sample-efficient and learns better policies.
In Figure FIGREF51, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. Our model further outperforms FiLM, and does so more consistently in that the final performance has less variance. This behaviour is also observed in the results in Figure FIGREF52. When training on the held-out set without transferring, our model is more sample efficient than FiLM and the CNN model, and achieves a higher win rate. When transferring to the held-out set, our model remains more sample efficient than the other models.
Language templates
We collect human-written natural language templates for the goal and the dynamics. The goal statements describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to which team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collect 10 language templates for each type of statement. The entire document is composed from these statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and with the modifiers and element for the second type. | We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In the case of a textual environment, we consider the grid of word embeddings as the visual features for these layers. The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. |
4d844c9453203069363173243e409698782bac3f | 4d844c9453203069363173243e409698782bac3f_0 | Q: Does transferring hurt the performance if the corpora are not related?
Text: INTRODUCTION
Transferring knowledge between related domains could help to improve a learner on a specific domain. Moreover, gathering data from different related resources proves very time-consuming and expensive. This necessitates the development of machine learning methods that extract knowledge from resources with different properties and characteristics. Transfer learning is the field in which machine learning methods address this challenge.
Recently, deep neural networks have outperformed many machine learning methods in many real-world applications, especially NLP tasks. Consequently, deep learning methods have attracted much attention for question answering problems, and many methods based on deep learning algorithms have been proposed in recent years. Deep neural networks, like other machine learning algorithms, have some shortcomings. One of the main drawbacks of deep networks is their huge number of parameters, which must be tuned during training. Consequently, deep neural networks require a huge number of training samples to train well and to prevent over-fitting. Also, using available and relevant data could improve their efficiency.
In addition, although deep learning methods have been well studied for question answering tasks in recent years, there is no comprehensive research on transfer learning through deep neural networks in the question answering field. Deep networks for NLP applications are shallow compared to those used in computer vision tasks. Consequently, many ways to perform transfer learning in the computer vision area are inapplicable to NLP tasks.
Mou et al. [6] conducted a comprehensive study on the transferability of deep networks in sentence classification. They concluded that the best time to transfer knowledge is when the tasks are semantically related, because words are more abstract entities and convey more semantics than pixels in images. Moreover, fine-tuning the weights is more effective than freezing the weights during learning of the target dataset. Finally, MULT and INIT are generally comparable, and Mou et al. did not observe any improvement by combining the two methods. They argue that the MULT method needs more in-depth analysis in future work. Because of these challenges, and because question answering has not been covered specifically, this paper studies transfer learning for question answering problems. Also, this paper follows Mou et al.'s recommendations and improves MULT using an Intelligent Sample Selection (ISS) process. In ISS, the samples from the source data that are most relevant to the target data are selected for transferring knowledge.
Related Works
Transfer learning has a long history, and many researchers have tried to utilize knowledge from relevant datasets to improve performance on a target dataset [9, 7]. Moreover, transfer learning has been well studied for deep learning methods in computer vision. Similarity between datasets is a key factor for performing transfer learning. Based on the similarity between two domains, different methods can be used. Bengio et al. [11] examine the ability of neural networks to transfer knowledge across different domains.
Transfer learning through deep networks is much more restricted in NLP than in computer vision because words are naturally high-level entities, differing from the low-level signals which make up an image. However, there are some studies in the NLP realm, which are restricted to sentence classification [6] and discourse relation classification [3]. Collobert and Weston [1] proposed a method in which they choose a random sample from the source or target domain with probability $\lambda$ and $(1-\lambda)$ respectively, and subsequently the computed gradient is backpropagated through the network. Moreover, in question answering, transfer learning through deep networks has not been well studied because the networks are not as deep as computer vision networks.
Datasets
Five different datasets (SQuAD, SelQA, WikiQA, a WikiQA variant, and InfoboxQA) are used for evaluation of the INIT, MULT and ISS-MULT methods. These datasets were proposed in recent years for question answering problems. The datasets are produced differently; therefore, they may not be semantically related, and this feature plays an important role in transfer learning for NLP tasks.
SQuAD: This dataset contains more than 107K crowdsourced questions on 536 Wikipedia articles. SQuAD uses entire Wikipedia articles to find the candidate answers. Also, the hit ratio for correct answers in SQuAD is about 35 percent [8].
SelQA: This dataset contains 8K+ questions, of which more than half are paraphrases of the first half. In this dataset, like SQuAD and WikiQA, entire Wikipedia articles are searched to answer the questions [2].
WikiQA: This dataset includes questions selected from Bing search queries. The abstracts of Wikipedia articles are used to find the answer candidates, and the hit rate is about 36 percent. This dataset contains 20K+ question and answer pairs. WikiQA differs from SQuAD and SelQA in its use of the abstract of each article to find the candidate answers [10].
WikiQA variant: This dataset contains 9K+ question and answer pairs derived from the original WikiQA. In this variant, unlike the original WikiQA, entire Wikipedia pages are searched for candidate answers. Moreover, the hit rate (13%) is much lower than in the original WikiQA; therefore, this dataset is much more challenging than the original WikiQA [13].
InfoboxQA: This dataset contains more than 15K questions extracted from 150 articles in Wikipedia. The answers lack complete sentence structure. Therefore, the nature of this dataset differs from that of the other datasets [5].
Methods
First, before mathematically discussing transfer learning, presenting a proper definition of this term is crucial. Generally, transfer learning is a process in which knowledge from a particular domain is transferred to a new domain with a different feature space and data distribution. This knowledge may be captured from a model which has already been trained on source data or through some mathematical mapping between two different domains.
To mathematically define transfer learning, we abide by the following notation [7]. Suppose each domain $D$ consists of two main parts: a feature space $\chi$ and a marginal probability distribution $P(X)$.
For this domain, there exists a task $T$ which also includes two main parts. The first part is the label space of the data, $\gamma$, and the second part is the predictive function $f(X)$, which is learned during training on the domain $D$. To summarize, for each transfer learning task, there is a domain $D = \lbrace \chi, P(X) \rbrace$ and a task $T = \lbrace \gamma, f(X) \rbrace$. Therefore, transfer learning requires at least two different domains and tasks. To this aim, consider $D_S$ as the source domain data, where $D_S = \lbrace \chi_S, P(X_S) \rbrace$. In the same way, $D_T$ is defined as the target domain data, where $D_T = \lbrace \chi_T, P(X_T) \rbrace$. Lastly, the source task is denoted $T_S$ and the target task $T_T$.
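As a small illustration of this notation (not taken from the paper), a domain can be represented as a feature space together with a marginal distribution, and a task as a label space together with a predictive function:

# Illustrative representation of the notation above; the class and field names are assumptions.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Domain:
    feature_space: Any       # chi
    marginal: Callable       # P(X), e.g. a sampler over the feature space

@dataclass
class Task:
    label_space: Any         # gamma
    predictor: Callable      # f(X), learned on the associated domain

# source and target instances, as required for transfer learning
source = Domain(feature_space="source features", marginal=lambda: None)
target = Domain(feature_space="target features", marginal=lambda: None)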
INIT Method
In the INIT method, the network is first trained on the source dataset; the weights corresponding to the best result on the development set of the source dataset are then used to initialize the network before it is trained on the target dataset. In this way, the source dataset provides an informed starting point for the gradient-based optimization on the target dataset rather than a random initialization.
Multi-Task Learning (MULT)
In the MULT method, two datasets are trained on simultaneously, and the weights are tuned based on inputs which come from both datasets. The hyper-parameter $\lambda \in (0,1)$ is calculated using a brute-force search or a general global search. This hyper-parameter is used to calculate the final cost function, which combines the cost functions of the source and target datasets. The final cost function is shown in Eq. 1.
$Cost = \lambda \times Cost(S) + (1-\lambda ) \times Cost(T)$ (1)
where $S$ and $T$ are the source and target datasets respectively. One straightforward way to compute this cost function is to randomly select samples from the two datasets with probability $\lambda$ and $(1-\lambda)$ respectively, and then compute the cost function for each sample. In other words, the parameter determines from which dataset a sample is drawn at each step of training.
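A hedged sketch of this sampling scheme is shown below: each training example is drawn from one of the two datasets according to $\lambda$, and the gradient of the corresponding per-sample cost is backpropagated. The helper names and the model.cost interface are assumptions, not the paper's implementation.

# Sketch of MULT-style mixed sampling (which dataset receives lambda follows Eq. 1 here).
import random

def mult_training_step(source_batch_iter, target_batch_iter, model, lam):
    # draw from the source dataset with probability lam, otherwise from the target dataset
    if random.random() < lam:
        batch = next(source_batch_iter)
    else:
        batch = next(target_batch_iter)
    loss = model.cost(batch)   # assumed per-sample cost on whichever dataset was drawn
    loss.backward()
    return loss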
The ways that the INIT and MULT methods transfer knowledge differ drastically. In INIT, an initial point for the optimization process is estimated instead of selecting a random point. Such an initialization can matter a great deal for gradient-based methods; however, it sometimes does not help in complex feature spaces. On the other hand, in MULT the samples from the two datasets simultaneously affect the optimization process, in which the source dataset behaves like a regularizer and potentially prevents the network from overfitting.
ISS-MULT
The MULT method needs more research in NLP [6]. One possible improvement of this method is to automatically tune the hyper-parameter $\lambda$ during training so that a unique and proper $\lambda$ is calculated for a specific dataset. Although estimating a proper $\lambda$ is not trivial, there are some restrictions which help in choosing one. First of all, the range of $\lambda$ is predetermined and lies between 0.85 and 1, because the network needs to see more data from the target dataset in each epoch. In addition, the effect of $\lambda$ on mean average precision (MAP), mean reciprocal rank (MRR) and F1-score is very complex. In other words, there are many local optima in this multi-objective optimization problem. Moreover, there is not much difference between the global optimum and the other local optima in this range.
Another way to improve this method is to select the samples which are most relevant to the target dataset. Given the importance of the similarity between datasets for transfer learning in NLP tasks, this paper proposes to use only the most relevant samples from the source dataset when training on the target dataset. One way to find the most similar samples is to compute the pair-wise distance between all samples of the development set of the target dataset and the source dataset.
This idea encounters two main problems. First, in our experiments the source dataset is a huge dataset like SQuAD, with more than 107K samples. Second, comparing two question and answer pairs using the cosine similarity is not fast, especially when each word is represented by a vector of length 300.
To solve this problem, we propose using a clustering algorithm on the development set. The clustering algorithm used here is hierarchical clustering, with the cosine similarity as the criterion for clustering each question and answer. These clusters are representative of the development set of the target dataset, and the center of each cluster is representative of all the samples in that cluster. In the next step, the cosine similarity between each source sample and these centers is calculated. Finally, the samples in the source dataset which are far from all of the centers are ignored. In other words, the outliers do not take part in transfer learning.
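One possible realisation of this sample-selection step, using off-the-shelf hierarchical clustering and cosine similarity, is sketched below. The number of clusters, the similarity threshold, and the function names are assumptions; the paper does not specify these values.

# Hypothetical ISS-style filtering of source samples (parameters are assumptions).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics.pairwise import cosine_similarity

def select_source_samples(target_dev_vecs, source_vecs, n_clusters=50, threshold=0.3):
    # hierarchically cluster the target development set using cosine distance
    Z = linkage(target_dev_vecs, method="average", metric="cosine")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    centers = np.stack([target_dev_vecs[labels == c].mean(axis=0)
                        for c in np.unique(labels)])
    # keep source samples that are close enough to at least one cluster centre
    sims = cosine_similarity(source_vecs, centers)   # (n_source, n_clusters)
    keep = sims.max(axis=1) >= threshold
    return np.where(keep)[0]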
Experiments
For evaluation of the INIT, MULT and ISS-MULT methods, the bigram CNN introduced by Yu et al. [12] is used. This model consists of two 2D-convolution layers, and each convolution layer is followed by a pooling layer. GoogleNews word2vec is used as a pre-trained embedding layer to represent each word. Forty different filters of size $(2 \times \text{embedding size}\ (300))$ are used to create the feature maps of each layer. Then, a logistic regression is used to predict the final result.
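A rough sketch of a bigram CNN of this kind is given below: two convolution-plus-pooling stages, forty filters spanning two words over 300-dimensional embeddings, and a logistic-regression-style output layer. Pooling sizes and other details not stated in the text are assumptions.

# Illustrative bigram CNN; layer sizes beyond those stated in the text are assumptions.
import torch
import torch.nn as nn

class BigramCNN(nn.Module):
    def __init__(self, emb_dim=300, n_filters=40, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, n_filters, kernel_size=(2, emb_dim))
        self.pool1 = nn.AdaptiveMaxPool2d((8, 1))           # assumed pooling size
        self.conv2 = nn.Conv2d(n_filters, n_filters, kernel_size=(2, 1))
        self.pool2 = nn.AdaptiveMaxPool2d((1, 1))
        self.classifier = nn.Linear(n_filters, n_classes)   # logistic-regression-style output

    def forward(self, x):                                   # x: (B, seq_len, emb_dim)
        x = x.unsqueeze(1)                                  # (B, 1, seq_len, emb_dim)
        x = torch.relu(self.conv1(x))                       # (B, 40, seq_len-1, 1)
        x = self.pool1(x)
        x = torch.relu(self.conv2(x))
        x = self.pool2(x).flatten(1)                        # (B, 40)
        return self.classifier(x)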
In this paper, two main question answering tasks, answer selection and answer triggering, are examined. In the answer triggering task, there is no guarantee that the correct answer appears among the list of candidate answers. However, in answer selection, there is at least one correct answer among the candidates. As a result, answer triggering is the more challenging task. To report results for answer selection, MAP and MRR are used; the answer triggering task is evaluated by F1-score. The results for the MULT method are reported in Table 1.
The results show that the MULT method improves all metrics for both the answer selection and answer triggering tasks. Moreover, this improvement is more significant in answer triggering. The result for the InfoboxQA dataset shows that this dataset deteriorates all metrics, because its nature is different from that of the other datasets. Moreover, the improvement on SQuAD is not significant because SQuAD is a big dataset and already contains enough data to fine-tune the parameters of the deep network.
In another experiment, the INIT method is implemented. In the INIT method, the weights corresponding to the best result on the development set of the source dataset are used to initialize the deep network. The results for the INIT method are listed in Table 2. Moreover, the results for the ISS-MULT method are shown in Table 3.
The results indicate that the ISS-MULT method improves all metrics on both the WikiQA and SelQA datasets. This improvement is more obvious in the answer triggering task.
We also perform another experiment to examine the INIT and MULT methods on the original WikiQA. The F1-score for this dataset is 33.73; however, the average INIT result with SQuAD and SelQA as initializers is 30.50. In addition, the average results for MULT and ISS-MULT are 31.76 and 32.65, respectively. The results on the original WikiQA indicate that all three transfer learning methods not only do not improve the results but actually hurt the F1-score. Therefore, SelQA and SQuAD could not provide a proper initial point for the gradient-based optimization method. Moreover, these corpora could not refine the error surface of the original WikiQA dataset during optimization for the MULT and ISS-MULT methods.
These are because other datasets could not add new information to the original dataset or they apparently add some redundant information which are dissimilar to the target dataset. Although ISS-MULT tries to remove this effect and consequently the result is improved, this method is on top of MULT method, and the result is significantly based on the effectiveness of this method.
According to Section 3, SQuAD, SelQA and WikiQA are created based on the same policy from entire Wikipedia pages; however, the original WikiQA is created based only on the abstract of each page. Therefore, the original WikiQA dataset is different from WikiQA, SelQA and SQuAD, which is most likely why the INIT and MULT methods do not work for the original WikiQA with the available corpora.
All in all, as indicated in Tables 1, 2 and 3, the INIT method generates slightly better results for the answer selection task compared to MULT and ISS-MULT. Moreover, all three methods improve the MAP and MRR metrics compared to the base method. On the other hand, MULT and ISS-MULT produce much better results for the answer triggering task than the INIT method. In this task, all three methods outperform the base method. Moreover, according to our experiments, using different policies to generate different datasets can strongly affect transfer learning for the answer triggering task.
Conclusions
In this paper, we presented a comprehensive experiment with two main transfer learning methods in deep learning on five recent corpora for question answering and answer triggering tasks. A new method built on top of MULT, named ISS-MULT, is presented. The results show that transfer learning could generally improve the results, and this improvement is larger in the answer triggering task. According to the results, we reach the conclusion that transfer learning works best on semantically related corpora but also works well on datasets created similarly.
Acknowledgement
The authors would like to thank Derek Onken, Massimiliano Lupo Pasini and Tomasz Jurczyk for their help in providing datasets and their guidance in this paper. | Yes |
5633d93ef356aca02592bae3dfc1b3ec8fce27dc | 5633d93ef356aca02592bae3dfc1b3ec8fce27dc_0 | Q: Is accuracy the only metric they used to compare systems?
Text: INTRODUCTION
Transferring knowledge between related domains could help to improve a learner on a special domain. Moreover, gathering data from different related resources proves very time-consuming and expensive. Therefore, this necessitates the development of machine learning methods to extract knowledge from these resources with different properties and characteristics. Transfer learning is a field in which machine learning methods encounter this challenge.
Recently, deep neural networks outperform many machine learning methods in many real-world machine learning applications, especially NLP tasks. Consequently, deep learning methods have attracted much attention for question answering problems, and many methods based on deep learning algorithms have been proposed in recent years. Deep neural networks like other machine learning algorithms have some shortcomings. One of the main drawbacks of deep networks is their huge number of parameters which must be tuned during training. Consequently, deep neural networks require a huge amount of training samples to finely train and to prevent over-training. Also, using available and relevant data could improve their efficiency.
In addition, although deep learning methods are well-studied for question answering tasks in recent years, there is no comprehensive research for transfer learning through deep neural networks in the question answering field. Deep networks are shallow for NLP applications compared to the computer vision tasks. Consequently, many ways to perform transfer learning in computer vision area are inapplicable to NLP tasks.
Mou et al. [6] conducted a comprehensive study on the transferability of deep networks in sentence classification. They concluded that the best time to transfer knowledge is when the tasks are semantically related, because words are more abstract entities and convey more semantics than pixels in images. Moreover, fine-tuning the weights is more effective than freezing the weights during learning on the target dataset. Finally, MULT and INIT are generally comparable, and Mou et al. did not observe any improvement from combining the two methods. They argue that the MULT method needs more in-depth analysis in future work. Because of these challenges, and because the question answering gap has not been covered specifically, this paper performs transfer learning on question answering problems. Also, this paper follows Mou et al.'s recommendations and improves MULT using an Intelligent Sample Selection (ISS) process. In ISS, the source samples that are most relevant to the target data are selected to transfer the knowledge.
Related Works
Transfer learning has a long history, and many researchers try to utilize the knowledge from relevant datasets to improve the performance on the target dataset [9, 7]. Moreover, transfer learning has been well-studied for deep learning methods in computer vision. Similarity between datasets is a key factor for performing transfer learning. Based on the similarity between two domains, different methods could be used. Bengio et al. [11] examine the ability of neural networks to transfer knowledge from different domains.
Transfer learning through deep networks is much more restricted in NLP than in computer vision, because words are naturally high-level entities, differing from the low-level signals which create an image. However, there are some studies in the NLP realm which are restricted to sentence classification [6] and discourse relation classification [3]. Collobert and Weston [1] proposed a method in which they chose a random sample from both source and target domains with probability $\lambda $ and $(1-\lambda )$ respectively, and subsequently the computed gradient is backpropagated through the network. Moreover, in question answering, transfer learning through deep networks has not been well-studied because the networks are not as deep as computer vision networks.
Datasets
Five different datasets (SQuAD, SelQA, WikiQA, WikiQA and InfoboxQA) are used for evaluation of the INIT, MULT and ISS-MULT methods. These datasets were proposed in recent years for question answering problems. These datasets are produced differently. Therefore, they may not be semantically related datasets, and this feature plays an important role in transfer learning in NLP tasks.
SQuAD: This dataset contains more than 107K crowdsourced questions on 536 Wikipedia articles. SQuAD uses the entire candidate articles from Wikipedia to find the candidate answers. Also, the hit ratio for correct answers in SQuAD is about 35 percent [8].
SelQA: This dataset contains 8K+ questions in which more than half of the questions are paraphrased from the first half. In this dataset, like SQuAD and WikiQA, the entire articles from Wikipedia are searched to answer the questions [2].
WikiQA: This dataset includes questions selected from the Bing search queries. The abstract of articles in Wikipedia is used to find the answer candidates, and the hit rate is about 36 percent. This dataset contains 20K+ question and answer pairs. WikiQA differs from SQuAD and SelQA in its use of the abstract page of each article to find the candidate answers [10].
WikiQA: This dataset contains 9K+ question and answer pairs from the original WikiQA. In WikiQA, unlike the original WikiQA, entire Wikipedia pages are searched for candidate answers. Moreover, the hit rate (13%) is much lower than in the original WikiQA; therefore, this dataset is much more challenging than the original WikiQA [13].
InfoboxQA: This dataset contains more than 15K questions extracted from 150 articles in Wikipedia. The answers lack complete sentence structure. Therefore, the nature of this dataset differs from that of the other datasets [5].
Methods
First, before mathematically discussing transfer learning, presenting a proper definition of this term is crucial. Generally, transfer learning is a process in which the knowledge from a specific domain is transferred to a new domain with different properties of feature space and data distribution. This knowledge may be captured from a model which has already been trained on source data or through some mathematical mapping between two different domains.
To mathematically define transfer learning, we abide by the following notation [7]. Suppose each domain $D$ consists of two main parts: a feature space $\chi $ and a marginal probability distribution $P(X)$.
For this domain, there exists a task named $T$ which also includes two main parts. The first part is the label space $\gamma $ , which refers to the labels of the data, and the second part is the predictive function $f(X)$ . Moreover, this function has been learned during the learning process on the domain $D$ . To summarize, for each transfer learning task, there is a domain $D = \lbrace \chi , P(X) \rbrace $ and a task $T = \lbrace \gamma , f(X) \rbrace $ . Therefore, for transfer learning, it is a necessity to have at least two different domains and tasks. To this aim, consider $D_S$ as the source domain data where $D_S = \lbrace \chi _S,P(X_S) \rbrace $ . In the same way, $D_T$ is defined as the target domain data where $D_T = \lbrace \chi _T,P(X_T) \rbrace $ . Lastly, the source task is denoted as $T_S$ , and the target task as $T_T$ .
INIT Method
In the INIT method, the network is first trained on the source dataset, and the weights that achieve the best results on the development set of the source dataset are then used to initialize the network before training on the target dataset, instead of a random initialization.
Multi-Task Learning (MULT)
In the MULT method, two datasets are simultaneously trained, and the weights are tuned based on the inputs which come from both datasets. The hyper-parameter $\lambda \in (0,1)$ is calculated based on a brute-force search or using general global search. This hyper parameter is used to calculate the final cost function which is computed from the combination of the cost function of the source dataset and the target datasets. The final cost function is shown in Eq. 1.
$Cost = \lambda \times Cost(S) + (1-\lambda ) \times Cost(T)$ (1)
where $S$ and $T$ are the source and target datasets, respectively. One straightforward way to compute the cost function is to randomly select samples from both datasets with probability $\lambda $ and $(1-\lambda )$ , respectively, and then compute the cost function for each sample. In other words, this parameter determines from which dataset each training sample is drawn.
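As a concrete illustration of this sampling scheme, the sketch below draws each training example from the source dataset with probability $\lambda $ (following Eq. 1, where $\lambda $ weights the source cost) and from the target dataset otherwise, so that the per-sample losses approximate the interpolated cost in expectation. The toy model, loss and data are placeholders and not the systems evaluated in this paper.

import random
import torch
import torch.nn as nn

def mult_training(model, source_data, target_data, lam=0.5, steps=200, lr=1e-3):
    """source_data / target_data: lists of (features, label) tensor pairs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        # With probability lam train on a source sample, otherwise on a target
        # sample, so the expected objective is lam * Cost(S) + (1 - lam) * Cost(T).
        x, y = random.choice(source_data if random.random() < lam else target_data)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Toy usage with random data and a linear scorer.
torch.manual_seed(0)
model = nn.Linear(10, 1)
src = [(torch.randn(1, 10), torch.rand(1, 1)) for _ in range(100)]
tgt = [(torch.randn(1, 10), torch.rand(1, 1)) for _ in range(20)]
mult_training(model, src, tgt, lam=0.5)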
The ways that the INIT and MULT methods transfer knowledge differ drastically. In INIT, an initial point for the optimization process is estimated instead of being selected at random. This initialization matters greatly for gradient-based methods; however, it sometimes does not work in the complex feature spaces of the problem. On the other hand, in MULT the samples from the two datasets simultaneously affect the optimization process, in which the source dataset behaves like a regularizer and potentially prevents the network from overfitting on the data.
ISS-MULT
The MULT method needs more research in NLP [6]. One possible improvement of this method is to automatically tune the hyper-parameter $\lambda $ during training, so that a unique and proper $\lambda $ is calculated for each specific dataset. Although estimating a proper $\lambda $ is not trivial, there are some restrictions which help users to choose one. First of all, the range of $\lambda $ is predetermined and is between 0.85 and 1, because the network needs to see more data from the target dataset in each epoch. In addition, the behavior of $\lambda $ with respect to mean average precision (MAP), mean reciprocal rank (MRR) and F1-score is very complex. In other words, there are many local optima in this multi-objective optimization problem. Moreover, there is not much difference between the global optimum and the other local optima in this range.
Another way to improve this method could be to select the samples which are more relevant to the target dataset. Based on the importance of the similarity between the datasets for transfer learning in the NLP tasks, this paper proposes to use the most relevant samples from the source dataset to train on the target dataset. One way to find the most similar samples is finding the pair-wise distance between all samples of the development set of the target dataset and source dataset.
This idea encounters two main problems. First, in our experiments the source dataset is a huge dataset such as SQuAD, with more than 107K samples. Second, comparing two question and answer pairs using cosine similarity is not fast, especially when each word is represented as a vector of length 300.
To solve this problem, we propose using a clustering algorithm on the development set. The clustering algorithm used here is a hierarchical clustering algorithm, and cosine similarity is used as the criterion to cluster the question and answer pairs. Therefore, these clusters are representative of the development set of the target dataset, and the corresponding center of each cluster is representative of all the samples in that cluster. In the next step, the cosine similarity between each cluster center and each sample in the source dataset is calculated. Finally, the samples in the source dataset which are far from these centers are ignored. In other words, the outliers do not take part in transfer learning.
Experiments
To evaluate the INIT, MULT and ISS-MULT methods, the bigram CNN introduced by Yu et al. [12] is used. This model consists of two 2D-convolution layers, and each convolution layer is followed by a pooling layer. GoogleNews word2vec is used as a pre-trained embedding layer to represent each word. Forty different filters of size $(2 \times 300)$, where 300 is the embedding size, are used to create the feature space for each layer. Then, a logistic regression is used to predict the final result.
In this paper, two main question answering tasks, answer selection and answer triggering, are examined. In the answer triggering task, there is no guarantee that the correct answer appears among the list of candidate answers. However, in answer selection, there is at least one correct answer among the candidates. As a result, answer triggering is a more challenging task. The results for answer selection are reported with MAP and MRR, whereas the answer triggering task is evaluated by F1-score. The results for the MULT method are reported in Table 1.
The results show that the MULT method improves all metrics for both the answer selection and answer triggering tasks. Moreover, this improvement is more significant in answer triggering. The results for the InfoboxQA dataset show that it deteriorates all metrics because the nature of this dataset is different from that of the other datasets. Moreover, the improvement on SQuAD is not significant because it is a large dataset and already contains enough data to fine-tune the parameters of the deep network.
In another experiment, the INIT method is implemented. In the INIT method, the weights that give the best results on the development set of the source dataset are used to initialize the deep network. The results for the INIT method are listed in Table 2, and the results for the ISS-MULT method are shown in Table 3.
The results indicate that the ISS-MULT method improves all metrics on both the WikiQA and SelQA datasets. This improvement is more obvious in the answer triggering task.
We also performed another experiment to examine the INIT and MULT methods on the original WikiQA. The F1-score for this dataset is 33.73; however, the average INIT result using SQuAD and SelQA as initializers is 30.50. In addition, the average results for MULT and ISS-MULT are 31.76 and 32.65, respectively. The results on the original WikiQA indicate that all three transfer learning methods not only fail to improve the results but also hurt the F1-score. Therefore, SelQA and SQuAD could not estimate a proper initial point for the gradient-based optimization method. Moreover, these corpora could not refine the error surface of the original WikiQA dataset during optimization for the MULT and ISS-MULT methods.
This is because the other datasets could not add new information to the original dataset, or they apparently add some redundant information which is dissimilar to the target dataset. Although ISS-MULT tries to remove this effect, and the result consequently improves, this method is built on top of the MULT method, and its result depends significantly on the effectiveness of MULT.
According to Section 3, SQuAD, SelQA and WikiQA are created based on the same policy from entire Wikipedia pages; however, the original WikiQA is created based only on the abstract of each page. Therefore, the original WikiQA dataset is different from WikiQA, SelQA and SQuAD, which is most likely why the INIT and MULT methods do not work for the original WikiQA with the available corpora.
All in all, as indicated in Tables 1, 2 and 3, the INIT method generates slightly better results for the answer selection task compared to MULT and ISS-MULT. Moreover, all three methods improve the MAP and MRR metrics compared to the base method. On the other hand, MULT and ISS-MULT produce much better results for the answer triggering task than the INIT method. In this task, all three methods outperform the base method. Moreover, according to our experiments, using different policies to generate different datasets can strongly affect transfer learning for the answer triggering task.
Conclusions
In this paper, we presented a comprehensive experiment with two main transfer learning methods in deep learning on five recent corpora for question answering and answer triggering tasks. A new method built on top of MULT, named ISS-MULT, is presented. The results show that transfer learning could generally improve the results, and this improvement is larger in the answer triggering task. According to the results, we reach the conclusion that transfer learning works best on semantically related corpora but also works well on datasets created similarly.
Acknowledgement
The authors would like to thank Derek Onken, Massimiliano Lupo Pasini and Tomasz Jurczyk for their help in providing datasets and their guidance in this paper. | No |
134598831939a3ae20d177cec7033d133625a88e | 134598831939a3ae20d177cec7033d133625a88e_0 | Q: How do they transfer the model?
Text: INTRODUCTION
Transferring knowledge between related domains could help to improve a learner on a special domain. Moreover, gathering data from different related resources proves very time-consuming and expensive. Therefore, this necessitates the development of machine learning methods to extract knowledge from these resources with different properties and characteristics. Transfer learning is a field in which machine learning methods encounter this challenge.
Recently, deep neural networks outperform many machine learning methods in many real-world machine learning applications, especially NLP tasks. Consequently, deep learning methods have attracted much attention for question answering problems, and many methods based on deep learning algorithms have been proposed in recent years. Deep neural networks like other machine learning algorithms have some shortcomings. One of the main drawbacks of deep networks is their huge number of parameters which must be tuned during training. Consequently, deep neural networks require a huge amount of training samples to finely train and to prevent over-training. Also, using available and relevant data could improve their efficiency.
In addition, although deep learning methods are well-studied for question answering tasks in recent years, there is no comprehensive research for transfer learning through deep neural networks in the question answering field. Deep networks are shallow for NLP applications compared to the computer vision tasks. Consequently, many ways to perform transfer learning in computer vision area are inapplicable to NLP tasks.
Mou et al. [6] conducted a comprehensive study on the transferability of deep networks in sentence classification. They concluded that the best time to transfer knowledge is when the tasks are semantically related, because words are more abstract entities and convey more semantics than pixels in images. Moreover, fine-tuning the weights is more effective than freezing the weights during learning on the target dataset. Finally, MULT and INIT are generally comparable, and Mou et al. did not observe any improvement from combining the two methods. They argue that the MULT method needs more in-depth analysis in future work. Because of these challenges, and because the question answering gap has not been covered specifically, this paper performs transfer learning on question answering problems. Also, this paper follows Mou et al.'s recommendations and improves MULT using an Intelligent Sample Selection (ISS) process. In ISS, the source samples that are most relevant to the target data are selected to transfer the knowledge.
Related Works
Transfer learning has a long history, and many researchers try to utilize the knowledge from relevant datasets to improve the performance on the target dataset [9, 7]. Moreover, transfer learning has been well-studied for deep learning methods in computer vision. Similarity between datasets is a key factor for performing transfer learning. Based on the similarity between two domains, different methods could be used. Bengio et al. [11] examine the ability of neural networks to transfer knowledge from different domains.
Transfer learning through deep networks is much more restricted in NLP than in computer vision, because words are naturally high-level entities, differing from the low-level signals which create an image. However, there are some studies in the NLP realm which are restricted to sentence classification [6] and discourse relation classification [3]. Collobert and Weston [1] proposed a method in which they chose a random sample from both source and target domains with probability $\lambda $ and $(1-\lambda )$ respectively, and subsequently the computed gradient is backpropagated through the network. Moreover, in question answering, transfer learning through deep networks has not been well-studied because the networks are not as deep as computer vision networks.
Datasets
Five different datasets (SQuAD, SelQA, WikiQA, WikiQA and InfoboxQA) are used for evaluation of the INIT, MULT and ISS-MULT methods. These datasets were proposed in recent years for question answering problems. These datasets are produced differently. Therefore, they may not be semantically related datasets, and this feature plays an important role in transfer learning in NLP tasks.
SQuAD: This dataset contains more than 107K crowdsourced questions on 536 Wikipedia articles. SQuAD uses the entire candidate articles from Wikipedia to find the candidate answers. Also, the hit ratio for correct answers in SQuAD is about 35 percent [8].
SelQA: This dataset contains 8K+ questions in which more than half of the questions are paraphrased from the first half. In this dataset, like SQuAD and WikiQA, the entire articles from Wikipedia are searched to answer the questions [2].
WikiQA: This dataset includes questions selected from the Bing search queries. The abstract of articles in Wikipedia is used to find the answer candidates, and the hit rate is about 36 percent. This dataset contains 20K+ question and answer pairs. WikiQA differs from SQuAD and SelQA in its use of the abstract page of each article to find the candidate answers [10].
WikiQA: This dataset contains 9K+ question and answer pairs from the original WikiQA. In WikiQA, unlike the original WikiQA, entire Wikipedia pages are searched for candidate answers. Moreover, the hit rate (13%) is much lower than in the original WikiQA; therefore, this dataset is much more challenging than the original WikiQA [13].
InfoboxQA: This dataset contains more than 15K questions extracted from 150 articles in Wikipedia. The answers lack complete sentence structure. Therefore, the nature of this dataset differs from that of the other datasets [5].
Methods
First, before mathematically discussing transfer learning, presenting a proper definition of this term is crucial. Generally, transfer learning is a process in which the knowledge from a specific domain is transferred to a new domain with different properties of feature space and data distribution. This knowledge may be captured from a model which has already been trained on source data or through some mathematical mapping between two different domains.
To mathematically define transfer learning, we abide by the following notation [7]. Suppose each domain $D$ consists of two main parts: a feature space $\chi $ and a marginal probability distribution $P(X)$.
For this domain, there exists a task named $T$ which also includes two main parts. The first part is the label space $\gamma $ , which refers to the labels of the data, and the second part is the predictive function $f(X)$ . Moreover, this function has been learned during the learning process on the domain $D$ . To summarize, for each transfer learning task, there is a domain $D = \lbrace \chi , P(X) \rbrace $ and a task $T = \lbrace \gamma , f(X) \rbrace $ . Therefore, for transfer learning, it is a necessity to have at least two different domains and tasks. To this aim, consider $D_S$ as the source domain data where $D_S = \lbrace \chi _S,P(X_S) \rbrace $ . In the same way, $D_T$ is defined as the target domain data where $D_T = \lbrace \chi _T,P(X_T) \rbrace $ . Lastly, the source task is denoted as $T_S$ , and the target task as $T_T$ .
INIT Method
In the INIT method, the network is first trained on the source dataset, and the weights that achieve the best results on the development set of the source dataset are then used to initialize the network before training on the target dataset, instead of a random initialization.
Multi-Task Learning (MULT)
In the MULT method, two datasets are simultaneously trained, and the weights are tuned based on the inputs which come from both datasets. The hyper-parameter $\lambda \in (0,1)$ is calculated based on a brute-force search or using general global search. This hyper parameter is used to calculate the final cost function which is computed from the combination of the cost function of the source dataset and the target datasets. The final cost function is shown in Eq. 1.
$Cost = \lambda \times Cost(S) + (1-\lambda ) \times Cost(T)$ (1)
where $S$ and $T$ are the source and target datasets, respectively. One straightforward way to compute the cost function is to randomly select samples from both datasets with probability $\lambda $ and $(1-\lambda )$ , respectively, and then compute the cost function for each sample. In other words, this parameter determines from which dataset each training sample is drawn.
The ways that the INIT and MULT methods transfer knowledge differ drastically. In INIT, an initial point for the optimization process is estimated instead of being selected at random. This initialization matters greatly for gradient-based methods; however, it sometimes does not work in the complex feature spaces of the problem. On the other hand, in MULT the samples from the two datasets simultaneously affect the optimization process, in which the source dataset behaves like a regularizer and potentially prevents the network from overfitting on the data.
ISS-MULT
The MULT method needs more research in NLP [6]. One possible improvement of this method is to automatically tune the hyper-parameter $\lambda $ during training, so that a unique and proper $\lambda $ is calculated for each specific dataset. Although estimating a proper $\lambda $ is not trivial, there are some restrictions which help users to choose one. First of all, the range of $\lambda $ is predetermined and is between 0.85 and 1, because the network needs to see more data from the target dataset in each epoch. In addition, the behavior of $\lambda $ with respect to mean average precision (MAP), mean reciprocal rank (MRR) and F1-score is very complex. In other words, there are many local optima in this multi-objective optimization problem. Moreover, there is not much difference between the global optimum and the other local optima in this range.
Another way to improve this method could be to select the samples which are more relevant to the target dataset. Based on the importance of the similarity between the datasets for transfer learning in the NLP tasks, this paper proposes to use the most relevant samples from the source dataset to train on the target dataset. One way to find the most similar samples is finding the pair-wise distance between all samples of the development set of the target dataset and source dataset.
This idea encounters two main problems. First, in our experiments the source dataset is a huge dataset such as SQuAD, with more than 107K samples. Second, comparing two question and answer pairs using cosine similarity is not fast, especially when each word is represented as a vector of length 300.
To solve this problem, we propose using a clustering algorithm on the development set. The clustering algorithm used here is a hierarchical clustering algorithm, and cosine similarity is used as the criterion to cluster the question and answer pairs. Therefore, these clusters are representative of the development set of the target dataset, and the corresponding center of each cluster is representative of all the samples in that cluster. In the next step, the cosine similarity between each cluster center and each sample in the source dataset is calculated. Finally, the samples in the source dataset which are far from these centers are ignored. In other words, the outliers do not take part in transfer learning.
Experiments
To evaluate the INIT, MULT and ISS-MULT methods, the bigram CNN introduced by Yu et al. [12] is used. This model consists of two 2D-convolution layers, and each convolution layer is followed by a pooling layer. GoogleNews word2vec is used as a pre-trained embedding layer to represent each word. Forty different filters of size $(2 \times 300)$, where 300 is the embedding size, are used to create the feature space for each layer. Then, a logistic regression is used to predict the final result.
In this paper, two main question answering tasks, answer selection and answer triggering, are examined. In the answer triggering task, there is no guarantee that the correct answer appears among the list of candidate answers. However, in answer selection, there is at least one correct answer among the candidates. As a result, answer triggering is a more challenging task. The results for answer selection are reported with MAP and MRR, whereas the answer triggering task is evaluated by F1-score. The results for the MULT method are reported in Table 1.
The results show that the MULT method improves all metrics for both the answer selection and answer triggering tasks. Moreover, this improvement is more significant in answer triggering. The results for the InfoboxQA dataset show that it deteriorates all metrics because the nature of this dataset is different from that of the other datasets. Moreover, the improvement on SQuAD is not significant because it is a large dataset and already contains enough data to fine-tune the parameters of the deep network.
In another experiment, the INIT method is implemented. In the INIT method, the weights that give the best results on the development set of the source dataset are used to initialize the deep network. The results for the INIT method are listed in Table 2, and the results for the ISS-MULT method are shown in Table 3.
The results indicate that the ISS-MULT method improves all metrics on both the WikiQA and SelQA datasets. This improvement is more obvious in the answer triggering task.
We also performed another experiment to examine the INIT and MULT methods on the original WikiQA. The F1-score for this dataset is 33.73; however, the average INIT result using SQuAD and SelQA as initializers is 30.50. In addition, the average results for MULT and ISS-MULT are 31.76 and 32.65, respectively. The results on the original WikiQA indicate that all three transfer learning methods not only fail to improve the results but also hurt the F1-score. Therefore, SelQA and SQuAD could not estimate a proper initial point for the gradient-based optimization method. Moreover, these corpora could not refine the error surface of the original WikiQA dataset during optimization for the MULT and ISS-MULT methods.
This is because the other datasets could not add new information to the original dataset, or they apparently add some redundant information which is dissimilar to the target dataset. Although ISS-MULT tries to remove this effect, and the result consequently improves, this method is built on top of the MULT method, and its result depends significantly on the effectiveness of MULT.
According to Section 3, SQuAD, SelQA and WikiQA are created based on the same policy from entire Wikipedia pages; however, the original WikiQA is created based only on the abstract of each page. Therefore, the original WikiQA dataset is different from WikiQA, SelQA and SQuAD, which is most likely why the INIT and MULT methods do not work for the original WikiQA with the available corpora.
All in all, as indicated in Tables 1, 2 and 3, the INIT method generates slightly better results for the answer selection task compared to MULT and ISS-MULT. Moreover, all three methods improve the MAP and MRR metrics compared to the base method. On the other hand, MULT and ISS-MULT produce much better results for the answer triggering task than the INIT method. In this task, all three methods outperform the base method. Moreover, according to our experiments, using different policies to generate different datasets can strongly affect transfer learning for the answer triggering task.
Conclusions
In this paper, we presented a comprehensive experiment with two main transfer learning methods in deep learning on five recent corpora for question answering and answer triggering tasks. A new method built on top of MULT, named ISS-MULT, is presented. The results show that transfer learning could generally improve the results, and this improvement is larger in the answer triggering task. According to the results, we reach the conclusion that transfer learning works best on semantically related corpora but also works well on datasets created similarly.
Acknowledgement
The authors would like to thank Derek Onken, Massimiliano Lupo Pasini and Tomasz Jurczyk for their help in providing datasets and their guidance in this paper. | In the MULT method, two datasets are simultaneously trained, and the weights are tuned based on the inputs which come from both datasets. The hyper-parameter $\lambda \in (0,1)$ is calculated based on a brute-force search or using general global search. This hyper parameter is used to calculate the final cost function which is computed from the combination of the cost function of the source dataset and the target datasets. , this paper proposes to use the most relevant samples from the source dataset to train on the target dataset. One way to find the most similar samples is finding the pair-wise distance between all samples of the development set of the target dataset and source dataset., we propose using a clustering algorithm on the development set. The clustering algorithm used ihere is a hierarchical clustering algorithm. The cosine similarity is used as a criteria to cluster each question and answer. Therefore, these clusters are representative of the development set of the target dataset and the corresponding center for each cluster is representative of all the samples on that cluster. In the next step, the distance of each center is used to calculate the cosine similarity. Finally, the samples in the source dataset which are far from these centers are ignored. In other words, the outliers do not take part in transfer learning. |
4bae74eb707ed71d5f438ddb3d9c2192ac490f66 | 4bae74eb707ed71d5f438ddb3d9c2192ac490f66_0 | Q: Will these findings be robust through different datasets and different question answering algorithms?
Text: Introduction
Question answering (QA) systems can provide most value for users by showing them a fine-grained short answer (answer span) in a context that supports the answer (paragraph in a document). However, fine-grained short answer annotations for question answering are costly to obtain, whereas non-expert annotators can annotate coarse-grained passages or documents faster and with higher accuracy. In addition, coarse-grained annotations are often freely available from community forums such as Quora. Therefore, methods that can learn to select short answers based on more abundant coarsely annotated paragraph-level data can potentially bring significant improvements. As an example of the two types of annotation, Figure 1 shows on the left a question with corresponding short answer annotation (underlined short answer) in a document, and on the right a question with a document annotated at the coarse-grained paragraph relevance level. In this work we study methods for learning short answer models from small amounts of data annotated at the short answer level and larger amounts of data annotated at the paragraph level. min-seo-hajishirzi:2017:Short recently studied a related problem of transferring knowledge from a fine-grained QA model to a coarse-grained model via multi-task learning and showed that finely annotated data can help improve performance on the coarse-grained task. We investigate the opposite and arguably much more challenging direction: improving fine-grained models using coarse-grained data.
We explore alternatives to the standard approach of multi-task learning via representation sharing BIBREF0 by leveraging the known correspondences between the coarse and fine-grained tasks. In the standard representation sharing approach, the dependencies between the fine-grained and coarse-grained tasks are modeled implicitly. The model must learn representations that are useful for all tasks without knowing how they relate to each other. However, in the scenario of learning from both fine and coarse supervision, the dependencies between the tasks can be modeled explicitly. For example, if a paragraph answers a question, we know that there exists a fine-grained answer span in the paragraph, providing strong constraints on the possible fine-grained answers for the question.
We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:
Task Definitions
The fine-grained short question answering task asks to select an answer span in a document containing multiple paragraphs. In the left example in Figure 1, the short answer to the question What was Nikola Tesla's ethnicity? is the phrase Serbian in the first paragraph in the document.
The coarse-grained labels indicate the relevance of document paragraphs. In the right example in Figure 1, the labels indicate whether or not the paragraphs in a given document contain the answers for the given question What was Martin Luther's nationality? without specifying the answer spans.
The goal of our paper is to design methods to learn from both fine-grained and coarse-grained labeled data, to improve systems for fine-grained QA.
Formal Definition
We define the fine-grained task of interest $T_y$ as predicting outputs $y$ from a set of possible outputs ${\cal {Y}}(x)$ given inputs $x$ . We say that a task $T_z$ to predict outputs $z$ given inputs $x$ is a coarse-grained counterpart of $T_y$ , iff each coarse label $z$ determines a sub-set of possible labels ${\cal {Y}}(z,x) \subset {\cal {Y}}(x)$ , and each fine label $y$ has a deterministically corresponding single coarse label $z$ . We refer to the fine-grained and coarse-grained training data as $D_y$ and $D_z$ respectively.
For our application of document-level QA, $T_y$ is the task of selecting a short answer span from the document, and $T_z$ is the task of selecting a paragraph from the document. The input $x$ to both tasks is a question-document pair. Each document is a sequence of $M$ paragraphs, and each paragraph with index $p$ (where $1 \le p \le M$ ) is a sequence of $n_p$ tokens. The set of possible outputs for the fine-grained task $T_y$ is the set of all phrases (contiguous substring spans) in all document paragraphs. The possible outputs for the coarse task $T_z$ are the paragraph indices $p$ . It is clear that each paragraph output $z$ determines a subset of possible outputs ${\cal {Y}}(z,x)$ (the phrases in the paragraph).
Fine-grained annotation is provided as $y=(a_{\mathit {p}}, a_{\mathit {start}}, a_{\mathit {end}})$ , where $a_{\mathit {p}}$ indicates the index of the paragraph containing the answer, and $a_{\mathit {start}}, a_{\mathit {end}}$ respectively indicate the start and end position of the short answer.
Paragraph-level supervision is provided as $z=(a_{\mathit {p}}, \_,\_)$ , only indicating the paragraph index of the answer, without the start and end token indices of the answer span. The coarse labels $z$ in this case limit the set of possible labels $y$ for $x$ to:
$${\cal {Y}}(z,x) = \lbrace (a_{\mathit {p}}, a^{\prime }_{\mathit {start}}, a^{\prime }_{\mathit {end}})~|~1 \le a^{\prime }_{\mathit {start}} \le a^{\prime }_{\mathit {end}} \le n_p\rbrace .$$ (Eq. 8)
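For concreteness, the constrained label set of Eq. 8 can be enumerated directly from a paragraph length. The helper below is an illustrative sketch only: span indices are 1-based to match the notation, and no answer-length cap is applied, although practical systems usually bound the span length.

def candidate_spans(a_p, n_p):
    """All fine labels (paragraph, start, end) compatible with coarse label a_p,
    where n_p is the number of tokens in paragraph a_p (Eq. 8)."""
    return [(a_p, start, end)
            for start in range(1, n_p + 1)
            for end in range(start, n_p + 1)]

# A 4-token paragraph yields 4 * 5 / 2 = 10 candidate answer spans.
print(len(candidate_spans(a_p=2, n_p=4)))  # 10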
In the presence of the coarsely annotated $D_z$ when the task of interest is $T_y$ , the research question becomes: how can we train a model to use both $D_z$ and $D_y$ in the most effective way?
Multi-task learning for MixedQA
The multi-task learning approach defines models for $T_y$ and $T_z$ that share some of their parameters. The data for task $T_z$ helps improve the model for $T_y$ via these shared parameters (representations). Multi-task learning with representation sharing is widely used with auxiliary tasks from reconstruction of unlabeled data BIBREF0 to machine translation and syntactic parsing BIBREF3 , and can be used with any task $T_z$ which is potentially related to the main task of interest $T_y$ .
Let $\theta = \begin{bmatrix} \theta _y & \theta _z & \theta _{s} \end{bmatrix}$ be the set of parameters in the two models. $\theta _y$ denotes parameters exclusive to the fine-grained task $T_y$ , $\theta _z$ denotes parameters exclusive to the coarse-grained task $T_z$ , and $\theta _s$ denotes the shared parameters across the two tasks.
Then the multi-task learning objective is to minimize $L(\theta , D_y,D_z)$ :
$$\begin{split} & -\sum _{(x,y) \in D_{y}} \log P(y|x, \theta _s, \theta _y) \\ -~~~\alpha _z &\sum _{(x,z) \in D_{z}} \log P(z|x , \theta _s, \theta _z) \end{split}$$ (Eq. 10)
Here $\alpha _z$ is a trade-off hyper-parameter to balance the objectives of the fine and coarse models.
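As a minimal sketch of this objective, suppose the task-specific output layers have already produced span logits (for $T_y$) and paragraph logits (for $T_z$) from the shared encoder; treating each candidate output as a single class for brevity, the interpolated loss can be computed as below. The logits and mini-batch sizes are placeholders.

import torch
import torch.nn.functional as F

def multitask_loss(fine_logits, fine_gold, coarse_logits, coarse_gold, alpha_z=0.5):
    """fine_logits: (n_fine_examples, n_candidate_spans) scores from the span head.
    coarse_logits: (n_coarse_examples, n_paragraphs) scores from the paragraph head.
    Both heads are assumed to sit on top of the shared encoder (theta_s)."""
    fine_nll = F.cross_entropy(fine_logits, fine_gold, reduction="sum")
    coarse_nll = F.cross_entropy(coarse_logits, coarse_gold, reduction="sum")
    return fine_nll + alpha_z * coarse_nll

# Toy usage: 3 finely labeled and 5 coarsely labeled examples.
fine_logits = torch.randn(3, 20, requires_grad=True)
coarse_logits = torch.randn(5, 4, requires_grad=True)
loss = multitask_loss(fine_logits, torch.tensor([4, 0, 7]),
                      coarse_logits, torch.tensor([1, 3, 0, 2, 2]))
loss.backward()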
We apply multi-task learning to question answering by reusing the architecture from min-seo-hajishirzi:2017:Short to define models for both fine-grained short answer selection $T_y$ and coarse-grained paragraph selection $T_z$ . After the two models are trained, only the model for the fine-grained task $T_y$ is used at test time to make predictions for the task of interest.
The shared component with parameters $\theta _s$ maps the sequence of tokens in the document $d$ to continuous representations contextualized with respect to the question $q$ and the tokens in the paragraph $p$ . We denote these representations as $\mathbf {h}(x,\theta _s) = (\mathbf {h}^1(\theta _s),\mathbf {h}^2(\theta _s),\ldots ,\mathbf {h}^{M}(\theta _s)),$
where we omit the dependence on $x$ for simplicity. Each contextualized paragraph token representation is a sequence of contextualized token representations, where $\mathbf {h}^p(\theta _s) = {h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s).$
Fine-grained answer selection model
The fine-grained answer selection model $P(y|x,\theta _s,\theta _y)$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ and makes predictions assuming that the start and end positions of the answer are independent, as in BiDAF BIBREF4 . The output parameters $\theta _y$ contain separate weights for predicting starts and ends of spans: $\theta _y = \begin{bmatrix} \theta _y^{\mathit {start}} & \theta _y^{\mathit {end}} \end{bmatrix}$
The probability of answer start $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ is proportional to $\exp (h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)\cdot \theta _y^{\mathit {start}})$ , where $h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)$ is the hidden representation of the token $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ , given shared parameters $\theta _s$ . The probability for end of answer positions is defined analogously.
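The independent start/end factorization can be sketched as follows: token representations are scored against separate start and end weight vectors, normalized over all tokens, and a span's probability is the product of its start and end probabilities. The normalization over all document tokens and the answer-length cap are assumptions for illustration.

import numpy as np

def span_probabilities(token_reprs, w_start, w_end):
    """token_reprs: (n_tokens, d) contextualized token vectors h(.).
    Returns (P_start, P_end), each a distribution over token positions."""
    def softmax(s):
        s = s - s.max()
        e = np.exp(s)
        return e / e.sum()
    return softmax(token_reprs @ w_start), softmax(token_reprs @ w_end)

def best_span(p_start, p_end, max_len=17):
    # Highest-probability span with start <= end (the length cap is an illustrative choice).
    best, best_p = None, -1.0
    for i, ps in enumerate(p_start):
        for j in range(i, min(i + max_len, len(p_end))):
            if ps * p_end[j] > best_p:
                best, best_p = (i, j), ps * p_end[j]
    return best, best_p

rng = np.random.default_rng(0)
h = rng.normal(size=(50, 8))
print(best_span(*span_probabilities(h, rng.normal(size=8), rng.normal(size=8))))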
Paragraph answer selection model
The paragraph selection model for task $T_{z}$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ for the tokens in the document. Because this model assigns scores at the paragraph granularity (as opposed to token granularity), we apply a pooling operation to the token representations to derive single vector paragraph representations. As in BIBREF2 , we use max-pooling over token representations and arrive at $h^p(\theta _s)=\mbox{max}({h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s))$
Using the coarse-grained task-specific parameters $\theta _z$ , we define the probability distribution over paragraphs as: $P(a_p = p | x, \theta _s, \theta _z) = \frac{\exp (h^p(\theta _s) \cdot \theta _z)}{\sum _{p^{\prime }}{\exp (h^{p^{\prime }}(\theta _s) \cdot \theta _z)}} $
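Correspondingly, the paragraph scorer can be sketched as max pooling over each paragraph's token vectors followed by a softmax over paragraphs; the shapes and weights below are placeholders rather than the trained model's values.

import numpy as np

def paragraph_probabilities(paragraph_token_reprs, theta_z):
    """paragraph_token_reprs: list of (n_p, d) arrays, one per paragraph.
    Returns P(a_p = p | x) via max pooling and a softmax over paragraphs."""
    pooled = np.stack([h.max(axis=0) for h in paragraph_token_reprs])  # (M, d)
    scores = pooled @ theta_z
    scores -= scores.max()
    e = np.exp(scores)
    return e / e.sum()

rng = np.random.default_rng(0)
paras = [rng.normal(size=(n, 8)) for n in (30, 12, 45)]
print(paragraph_probabilities(paras, rng.normal(size=8)))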
Latent Variable Methods for MixedQA
We study two types of latent variable methods that capture the dependencies between the fine and coarse tasks explicitly. Unlike the multitask learning algorithm described above, both eliminate the need for parameters specifically for the coarse task $\theta _z$ , since we treat the fine labels as a latent variable in the coarsely annotated data.
The dependencies between the coarse and fine supervision labels can be captured by the following consistency constraints implied by our task definition: $ \begin{split} P(y, z|x) = 0, & \forall y \notin \mathcal {Y}(z, x), \text{ and } \\ P(z|y,x) = 1, & \forall y \in \mathcal {Y}(z, x). \end{split} $
Maximum Marginal Likelihood
For the task of document-level QA, these constraints ensure that a paragraph is labeled as positive iff there exists a positive answer text span inside the paragraph.
The idea of the maximum marginal likelihood method is to define a distribution over coarse labels using the fine-grained model's distribution over fine labels. By expanding the above equations expressing the task dependencies,
$$P(z|x, \theta ) = \sum _{y \in \mathcal {Y}(x)} P(y,z|x, \theta ) = \hspace{-6.0pt}\sum _{y \in \mathcal {Y}(z, x)}\hspace{-6.0pt}P(y|x, \theta )$$ (Eq. 17)
This equation simply says that the probability that a given paragraph $z$ is relevant is the sum of the probabilities of all possible short answer spans within the paragraph.
The objective function for the coarsely labeled data $D_{z}$ can be expressed as a function of the parameters of the fine-grained task model as:
$$\begin{split} -\sum _{(x,z) \in D_{z}} \log \sum _{y \in \mathcal {Y}(z, x)} P(y|x, \theta _s, \theta _y) \end{split}$$ (Eq. 18)
The fine-grained task loss and the coarse-grained task loss are interpolated with a parameter $\alpha _z$ , as for the multi-task approach.
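In implementation terms, the marginal in Eq. 18 is a log-sum-exp over the scores of the spans that fall inside the annotated paragraph, normalized by the log-partition over all document spans. The numpy sketch below assumes the span scores are unnormalized logits precomputed by the model.

import numpy as np

def mml_loss(span_logits, span_paragraph_ids, gold_paragraph):
    """span_logits: (n_spans,) unnormalized scores for every candidate span in the document.
    span_paragraph_ids: paragraph index of each span.
    Returns -log sum_{y in Y(z, x)} P(y | x)."""
    log_z = np.logaddexp.reduce(span_logits)                     # log-partition over all spans
    in_gold = span_logits[span_paragraph_ids == gold_paragraph]  # spans inside the labeled paragraph
    return -(np.logaddexp.reduce(in_gold) - log_z)

rng = np.random.default_rng(0)
logits = rng.normal(size=200)
para_ids = rng.integers(0, 4, size=200)
print(mml_loss(logits, para_ids, gold_paragraph=2))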
Posterior Distillation
In addition to direct maximization of the marginal likelihood for latent variable models BIBREF5 , prior work has explored EM-based optimization BIBREF6 including generalized EM BIBREF7 , which is applicable to neural models BIBREF8 .
We present a class of optimization algorithms which we term Posterior Distillation, which includes generalized EM for our problem as a special case, and has close connections to knowledge distillation BIBREF9 , BIBREF10 .
We begin by describing an online generalized EM optimization algorithm for the latent variable model from equation ( 17 ) and show how it can be generalized to multiple variants inspired by knowledge distillation with priviledged information BIBREF11 . We refer to the more general approach as Posterior Distillation.
Algorithm 1: Posterior Distillation. While not converged: (1) sample a mini-batch $(x_1,y) \sim D_y$ and $(x_2,z) \sim D_z$ ; (2) calculate the predicted distribution $P(\hat{y}|x_2, \theta ^{old})$ for the current parameters $\theta ^{old}$ ; (3) correct and renormalize the predicted distribution using the coarse supervision signal by setting $q(\hat{y}|x_2) \propto P(\hat{y}|x_2, \theta ^{old})$ if $\hat{y} \in \mathcal {Y}(z)$ and $q(\hat{y}|x_2) = 0$ if $\hat{y} \notin \mathcal {Y}(z)$ ; (4) update $\theta $ by taking a step to minimize $-\log P(y|x_1, \theta ) + \alpha _z \, \textsc {distance}(P(\hat{y}|x_2,\theta ), q)$ .
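The numpy sketch below mirrors the coarse part of one such update: the model's current span distribution is projected onto the labeled paragraph, renormalized, and compared with the model distribution under either the cross-entropy or the squared-error distance. Gradient propagation, the fine-grained term, and the stop-gradient through the teacher are omitted; the arrays stand in for model outputs.

import numpy as np

def softmax(logits):
    logits = logits - logits.max()
    e = np.exp(logits)
    return e / e.sum()

def posterior_distillation_distance(span_logits, span_paragraph_ids, gold_paragraph,
                                    distance="squared_error"):
    p = softmax(span_logits)                      # P(y | x, theta) over all candidate spans
    mask = (span_paragraph_ids == gold_paragraph)
    q = p * mask
    q = q / q.sum()                               # teacher: projection onto Y(z, x), as in Eq. 20;
                                                  # in training q is computed from theta_old and held fixed
    if distance == "squared_error":
        return np.sum((q - p) ** 2)
    return -np.sum(q * np.log(p + 1e-12))         # cross-entropy: the EM / MML case

rng = np.random.default_rng(0)
logits = rng.normal(size=200)
para_ids = rng.integers(0, 4, size=200)
print(posterior_distillation_distance(logits, para_ids, gold_paragraph=2))
print(posterior_distillation_distance(logits, para_ids, gold_paragraph=2, distance="xent"))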
In EM-like algorithms one uses current model parameters $\theta ^{old}$ to make predictions and complete the latent variables in input examples, and then updates the model parameters to maximize the log-likelihood of the completed data. We formalize this procedure for our case below.
Given a coarse example with input $x$ and coarse label $z$ , we first compute the posterior distribution over the fine labels $y$ given $z$ and the current set of parameters $\theta ^{old}$ :
$$P(y | x, z, \theta ^{old}) = \frac{[[\hbox{$y \in {\cal Y}(z, x)$}]] \times P(y | x, \theta ^{old})}{\displaystyle \sum _{y \in {\mathcal {Y}}(z, x)} P(y | x, \theta ^{old})}$$ (Eq. 20)
where $[[\cdot ]]$ is the indicator function. In EM, we update the parameters $\theta $ to minimize the negative expected log-likelihood of the fine labels with respect to the posterior distribution: $$\begin{split} Q(\theta , \theta ^{old}) &= -\mathop {\mathbb {E}}_{P(y | x, z, \theta ^{old})} \log P(y | x, \theta ) \\ &= -\sum _{y \in {\cal Y}(x)} P(y | x, z, \theta ^{old}) \log P(y | x, \theta ) \end{split}$$
By taking a gradient step towards minimizing $Q(\theta , \theta ^{old})$ with respect to $\theta $ , we arrive at a form of generalized EM BIBREF7 . If the loss $Q$ is computed over a mini-batch, this is a form of online EM.
We propose a variant of this EM algorithm that is inspired by knowledge distillation methods BIBREF9 , BIBREF10 , where a student model learns to minimize the distance between its predictions and a teacher model's predictions. In our case, we can consider the posterior distribution $P(y | x, z, \theta ^{old})$ to be the teacher, and the model distribution $P(y | x, \theta )$ to be the student. Here the teacher distribution is directly derived from the model (student) distribution $P(y | x, \theta ^{old})$ by integrating the information from the coarse label $z$ . The coarse labels can be seen as privileged information BIBREF11 which the student does not condition on directly.
Let us define $Q(\theta , \theta ^{old})$ in a more general form, where it is a general distance function rather than cross-entropy: $ Q(\theta , \theta ^{old}) = \textsc {distance}(P(y | x, z, \theta ^{old}), P(y | x, \theta )) $
We refer to the class of learning objectives in this form as posterior distillation. When the distance function is cross entropy, posterior distillation is equivalent to EM. As is common in distillation techniques BIBREF12 , we can apply other distance functions, such as the squared error. $ Q(\theta , \theta ^{old}) = \sum _{y \in {\cal Y}(x)} \left\Vert P(y | x, z, \theta ^{old}) - P(y | x, \theta ) \right\Vert _2^2 $
In our experiments, we found that squared error outperforms cross entropy consistently.
This algorithm also has a close connection to Posterior Regularization BIBREF13 . The coarse supervision labels $z$ can be integrated using linear expectation constraints on the model posteriors $P(y|x,\theta )$ , and a KL-projection onto the constrained space can be done exactly in closed form using equation 20 . Thus the PR approach in this case is equivalent to posterior distillation with cross-entropy and to EM. Note that the posterior distillation method is more general because it allows additional distance functions.
The combined loss function using both finely and coarsely labeled data to be minimized is:
$$\begin{split} & \sum _{(x,y) \in D_{y}} -\log P(y|x, \theta _s) \\ +~~~\alpha _z &\sum _{(x,z) \in D_{z}} Q(\theta ,\theta ^{old},x,z) \end{split}$$ (Eq. 21)
Figure 2 presents an illustration of the multi-task and posterior distillation approaches for learning from both finely and coarsely labeled data. Algorithm 1 lists the steps of optimization. Each iteration of the loop samples mini-batches from the union of finely and coarsely labeled data and takes a step to minimize the combined loss.
Experiments
We present experiments on question answering using the multi-task and latent variable methods introduced in the prior section.
Mixed supervision data
We focus on the document-level variant of the SQuAD dataset BIBREF1 , as defined by docqa, where given a question and document, the task is to determine the relevant passage and answer span within the passage $(a_p, a_{\mathit {start}}, a_{\mathit {end}})$ . We define finely annotated subsets $D_{y}$ with two different sizes: 5% and 20% of the original dataset. These are paired with non-overlapping subsets of coarsely annotated data $D_{z}$ with sizes 20% and 70% of the original training set, respectively. Both of these settings represent the regime where coarsely annotated data is available in higher volume, because such data can be obtained faster and at lower cost. For both dataset settings, we derive $D_{y}$ and $D_{z}$ from the SQuAD training set, by allocating whole documents with all their corresponding questions to a given subset. In both settings, we also reserve a finely annotated non-overlapping set $\mbox{Dev}_{y}$ , which is used to select optimal hyperparameters for each method. We report final performance metrics on $\mbox{Test}_{y}$ , which is the unseen SQuAD development set.
QA model
We build on the state-of-the-art publicly available question answering system by docqa. The system extends BiDAF BIBREF4 with self-attention and performs well on document-level QA. We reuse all hyperparameters from docqa with the exception of the number of paragraphs sampled in training: 8 instead of 4. Using more negative examples was important when learning from both fine and coarse annotations. The model uses character embeddings with dimension 50, pre-trained GloVe embeddings, and hidden units for bi-directional GRU encoders with size 100. Adadelta is used for optimization for all methods. We tune two hyperparameters separately for each condition based on the held-out set: (1) $\alpha \in \lbrace .01, .1, .5, 1, 5, 10, 100 \rbrace $ , the weight of the coarse loss, and (2) the number of steps for early stopping. The training time for all methods using both coarse and fine supervision is comparable.
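The tuning protocol can be sketched as a grid search over the coarse-loss weight, with the early-stopping point chosen on the held-out set $\mbox{Dev}_{y}$. In the sketch below, train_and_evaluate is a hypothetical helper standing in for a full training run, not part of the released system.

ALPHA_GRID = [0.01, 0.1, 0.5, 1, 5, 10, 100]

def tune(train_and_evaluate):
    """train_and_evaluate(alpha) is an assumed helper that trains one model with coarse-loss
    weight alpha, evaluates checkpoints on Dev_y, and returns (best_dev_f1, best_step)."""
    results = {alpha: train_and_evaluate(alpha) for alpha in ALPHA_GRID}
    best_alpha = max(results, key=lambda a: results[a][0])
    return best_alpha, results[best_alpha]

# Toy usage with a synthetic stand-in for a real training run.
import math
print(tune(lambda a: (0.5 - 0.05 * abs(math.log10(a)), 20000)))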
Results
We report results evaluating the impact of using coarsely annotated data in the two dataset conditions in Figure 3 . There are two groups of rows corresponding to the two data sizes: in the smaller setting, only 5% of the original fine-grained data is used, and in the medium setting, 20% of the fine-grained data is used. The first row in each group indicates the performance when using only finely labeled fully supervised data. The column Fine-F1 indicates the performance metric of interest – the test set performance on document-level short answer selection. The next rows indicate the performance of a multi-task and the best latent variable method when using the finely labeled data plus the additional coarsely annotated datasets. The ceiling performance in each group shows the oracle achieved by a model also looking at the gold fine-grained labels for the data that the rest of the models see with only coarse paragraph-level annotation. The column Gain indicates the relative error reduction of each model compared to the supervised-only baseline with respect to the ceiling upper bound. As we can see all models benefit from coarsely labeled data and achieve at least 20% error reduction. The best latent variable method (Posterior Distillation with squared error distance) significantly outperforms the multi-task approach, achieving up to 41% relative gain.
Figure 4 compares the performance of the three different optimization methods using latent fine-grained answer variables for coarsely annotated data. Here we inlcude an additional last column reporting performance on an easier task where the correct answer paragraph is given at test time, and the model only needs to pick out the short answer within the given paragraph. We include this measurement to observe whether models are improving just by picking out relevant paragraphs or also by selecting the finer-grained short answers within them. Since EM and MML are known to optimize the same function, it is unsurprising that MML and PD with cross-entropy (equivalent to EM) perform similarly. For posterior distillation, we observe substantially better performance with the squared error as the distance function, particularly in the second setting, where there is more coarsely annotated data.
To gain more insight into the behavior of the different methods using coarsely annotated data, we measured properties of the predictive distributions $P(y|x,\theta )$ on the dataset used with coarse labels in training, $D_{70coarse}$ . The results are shown in Figure 5 . For models MTL, MML, PD( $xent$ ), and PD( $err^2$ ), trained on finely labeled $D_{20fine}$ and coarsely labeled $D_{70coarse}$ , we study the predictive distributions $P(y|x,\theta ^M)$ for the four model types $M$ . We measure the properties of these distributions on the dataset $D_{70fine}$ , which is the finely labeled version of the same (question, document)-pairs $D_{70}$ as $D_{70coarse}$ . Note that none of the models see the fine-grained short answer labels for $D_{70}$ in training since they only observe paragraph-level relevance annotations. Nevertheless, the models can assign a probability distribution over fine-grained labels in the documents, and we can measure the peakiness (entropy) of this distribution, as well as see how it compares to the gold hidden label distribution.
The first column in the table reports the entropies of the predictive distributions for the four trained models (using the fine task model for the multi-task method MTL). We can see that multi-task method MTL and PD( $xent$ ) (which is equivalent to generalized EM) have lowest entropy, and are most confident about their short answer predictions. MML marginalizes over possible fine answers, resulting in flatter predictive distributions which spread mass among multiple plausible answer positions. The best-performing method PD( $err^2$ ) is somewhere in between and maintains more uncertainty. The next two columns in the Table look at the cross-entropy ( $xent$ ) and squared error ( $err^2$ ) distances of the predictive distributions with respect to the gold one. The gold label distribution has mass of one on a single point indicating the correct fine answer positions. Note that none of the models have seen this gold distribution during training and have thus not been trained to minimize these distances (the PD latent variable models are trained to minimize distance with respect to projected model distributions given coarse passage labels $z$ ). We can see that the predictive distribution of the best method PD( $err^2$ ) is closest to the gold labels. The maximum marginal likelihood method MML comes second in approaching the gold distribution. The multi-task approach lags behind others in distance to the fine-grained gold labels, but comes first in the measurement in the last column, Passage-MRR. That column indicates the mean reciprocal rank of the correct gold passage according to the model. Here passages are ranked by the score of the highest-scoring short answer span within the passage. This measurement indicates that the multi-task model is able to learn to rank passages correctly from the coarse-grained passage-level annotation, but has a harder time to transfer this improvement to the task of picking fine-grained short answers within the passages.
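The diagnostics in this paragraph (entropy, distance to the gold distribution, and passage MRR) are straightforward to compute. The sketch below assumes a flattened predictive distribution over candidate spans and per-passage lists of span scores; these interfaces are assumptions for illustration, not the paper's analysis code.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return float(-np.sum(p * np.log(p + eps)))

def xent_to_gold(p, gold_idx, eps=1e-12):
    # Gold distribution puts all mass on the single correct span index.
    return float(-np.log(p[gold_idx] + eps))

def sq_err_to_gold(p, gold_idx):
    gold = np.zeros_like(p)
    gold[gold_idx] = 1.0
    return float(np.sum((p - gold) ** 2))

def passage_mrr(span_scores_per_passage, gold_passage):
    # Rank passages by their highest-scoring span, as described above.
    best = [max(scores) for scores in span_scores_per_passage]
    order = sorted(range(len(best)), key=lambda i: -best[i])
    return 1.0 / (order.index(gold_passage) + 1)
```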
Text-based Question Answering
In span-based reading comprehension, a system must be able to extract a plausible text-span answer for a given question from a context document or paragraph BIBREF1 , BIBREF14 , BIBREF15 . Most work has focused on selecting short answers given relevant paragraphs, but datasets and works considering the more realistic task of selection from full documents are starting to appear BIBREF14 .
Sentence selection or paragraph selection datasets test whether a system can correctly rank texts that are relevant for answering a question higher than texts that are not. Wang2007EMNLP constructed the QASent dataset based on questions from TREC 8-13 QA tracks. WikiQA BIBREF16 associates questions from the Bing search query log with all the sentences in the corresponding Wikipedia summary paragraph, which are then labeled by crowd workers. Most state-of-the-art models for both types of tasks make use of neural network modules to construct and compare representations for a question and the possible answers. We build on a near state-of-the-art baseline model and evaluate on a document-level short question answering task.
Data Augmentation and Multi-Task Learning in QA
There have been several works addressing the paucity of annotated data for QA. Data noisily annotated with short answer spans has been generated automatically through distant supervision and shown to be useful BIBREF14 . Unlabeled text and data augmentation through machine translation have been used to improve model quality BIBREF17 , BIBREF18 , BIBREF19 . min-seo-hajishirzi:2017:Short used short-answer annotations in SQuAD BIBREF1 to improve paragraph-level question answering for WikiQA BIBREF16 . To the best of our knowledge, there has been no prior work using QA data annotated at the paragraph level to improve models for short question answering. | Yes |
c30c3e0f8450b1c914d29f41c17a22764fa078e0 | c30c3e0f8450b1c914d29f41c17a22764fa078e0_0 | Q: What is the underlying question answering algorithm?
Text: Introduction
Question answering (QA) systems can provide most value for users by showing them a fine-grained short answer (answer span) in a context that supports the answer (paragraph in a document). However, fine-grained short answer annotations for question answering are costly to obtain, whereas non-expert annotators can annotate coarse-grained passages or documents faster and with higher accuracy. In addition, coarse-grained annotations are often freely available from community forums such as Quora. Therefore, methods that can learn to select short answers based on more abundant coarsely annotated paragraph-level data can potentially bring significant improvements. As an example of the two types of annotation, Figure 1 shows on the left a question with corresponding short answer annotation (underlined short answer) in a document, and on the right a question with a document annotated at the coarse-grained paragraph relevance level. In this work we study methods for learning short answer models from small amounts of data annotated at the short answer level and larger amounts of data annotated at the paragraph level. min-seo-hajishirzi:2017:Short recently studied a related problem of transferring knowledge from a fine-grained QA model to a coarse-grained model via multi-task learning and showed that finely annotated data can help improve performance on the coarse-grained task. We investigate the opposite and arguably much more challenging direction: improving fine-grained models using coarse-grained data.
We explore alternatives to the standard approach of multi-task learning via representation sharing BIBREF0 by leveraging the known correspondences between the coarse and fine-grained tasks. In the standard representation sharing approach, the dependencies between the fine-grained and coarse-grained tasks are modeled implicitly. The model must learn representations that are useful for all tasks without knowing how they relate to each other. However, in the scenario of learning from both fine and coarse supervision, the dependencies between the tasks can be modeled explicitly. For example, if a paragraph answers a question, we know that there exists a fine-grained answer span in the paragraph, providing strong constraints on the possible fine-grained answers for the question.
We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:
Task Definitions
The fine-grained short question answering task requires selecting an answer span from a document containing multiple paragraphs. In the left example in Figure 1, the short answer to the question What was Nikola Tesla's ethnicity? is the phrase Serbian in the first paragraph of the document.
The coarse-grained labels indicate the relevance of document paragraphs. In the right example in Figure 1, the labels indicate whether or not the paragraphs in a given document contain the answers for the given question What was Martin Luther's nationality? without specifying the answer spans.
The goal of our paper is to design methods to learn from both fine-grained and coarse-grained labeled data, to improve systems for fine-grained QA.
Formal Definition
We define the fine-grained task of interest $T_y$ as predicting outputs $y$ from a set of possible outputs ${\cal {Y}}(x)$ given inputs $x$ . We say that a task $T_z$ to predict outputs $z$ given inputs $x$ is a coarse-grained counterpart of $T_y$ , iff each coarse label $z$ determines a subset of possible labels ${\cal {Y}}(z,x) \subset {\cal {Y}}(x)$ , and each fine label $y$ has a deterministically corresponding single coarse label $z$ . We refer to the fine-grained and coarse-grained training data as $D_{y}$ and $D_{z}$ respectively.
For our application of document-level QA, $T_y$ is the task of selecting a short answer span from the document, and $T_z$ is the task of selecting a paragraph from the document. The input $x$ to both tasks is a question-document pair. Each document is a sequence of $M$ paragraphs, and each paragraph with index $p$ (where $1 \le p \le M$ ) is a sequence of $n_p$ tokens. The set of possible outputs for the fine-grained task $T_y$ is the set of all phrases (contiguous substring spans) in all document paragraphs. The possible outputs for the coarse task $T_z$ are the paragraph indices $p$ . It is clear that each paragraph output $z$ determines a subset of possible outputs ${\cal {Y}}(z,x)$ (the phrases in the paragraph).
Fine-grained annotation is provided as $y=(a_{\mathit {p}}, a_{\mathit {start}}, a_{\mathit {end}})$ , where $a_{\mathit {p}}$ indicates the index of the paragraph containing the answer, and $a_{\mathit {start}}, a_{\mathit {end}}$ respectively indicate the start and end position of the short answer.
Paragraph-level supervision is provided as $z=(a_{\mathit {p}}, \_,\_)$ , only indicating the paragraph index of the answer, without the start and end token indices of the answer span. The coarse labels $z$ in this case limit the set of possible labels $y$ for $x$ to:
$${\cal {Y}}(z,x) = \lbrace (a_{\mathit {p}}, a^{\prime }_{\mathit {start}}, a^{\prime }_{\mathit {end}})~|~1 \le a^{\prime }_{\mathit {start}} \le a^{\prime }_{\mathit {end}} \le n_p\rbrace .$$ (Eq. 8)
In the presence of the coarsely annotated $D_z$ when the task of interest is $T_y$ , the research question becomes: how can we train a model to use both $D_z$ and $D_y$ in the most effective way?
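To make the candidate set ${\cal {Y}}(z,x)$ concrete, the following sketch enumerates every start/end pair inside the paragraph indicated by a coarse label. The optional max_span_len cutoff is a practical assumption for tractability, not part of the formal definition above.

```python
def candidate_spans(paragraph_lengths, a_p, max_span_len=None):
    """All (paragraph, start, end) outputs consistent with coarse label a_p."""
    n_p = paragraph_lengths[a_p]
    spans = []
    for start in range(1, n_p + 1):
        for end in range(start, n_p + 1):
            if max_span_len is None or end - start + 1 <= max_span_len:
                spans.append((a_p, start, end))
    return spans

# Example: a labeled paragraph of 4 tokens yields 10 candidate spans.
print(len(candidate_spans({2: 4}, a_p=2)))  # -> 10
```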
Multi-task learning for MixedQA
The multi-task learning approach defines models for $T_y$ and $T_z$ that share some of their parameters. The data for task $T_z$ helps improve the model for $T_y$ via these shared parameters (representations). Multi-task learning with representation sharing is widely used with auxiliary tasks from reconstruction of unlabeled data BIBREF0 to machine translation and syntactic parsing BIBREF3 , and can be used with any task $T_z$ which is potentially related to the main task of interest $T_y$ .
Let $\theta = \begin{bmatrix} \theta _y & \theta _z & \theta _{s} \end{bmatrix}$ be the set of parameters in the two models. $\theta _y$ denotes parameters exclusive to the fine-grained task $T_y$ , $\theta _z$ denotes parameters exclusive to the coarse-grained task $T_z$ , and $\theta _s$ denotes the shared parameters across the two tasks.
Then the multi-task learning objective is to minimize $L(\theta , D_y,D_z)$ :
$$\begin{split} & -\sum _{(x,y) \in D_{y}} \log P(y|x, \theta _s, \theta _y) \\ -~~~\alpha _z &\sum _{(x,z) \in D_{z}} \log P(z|x , \theta _s, \theta _z) \end{split}$$ (Eq. 10)
Here $\alpha _z$ is a trade-off hyper-parameter to balance the objectives of the fine and coarse models.
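As a rough illustration of Eq. 10, the interpolated objective is simply two summed negative log-likelihood terms weighted by $\alpha _z$. The batch structure assumed below is for the sketch only.

```python
import numpy as np

def mixed_multitask_loss(fine_logprobs, coarse_logprobs, alpha_z):
    """Eq. 10: NLL of gold spans on the D_y batch plus alpha_z times the
    NLL of gold paragraphs on the D_z batch.

    fine_logprobs:   log P(y|x) of the gold span, one value per D_y example
    coarse_logprobs: log P(z|x) of the gold paragraph, one value per D_z example
    """
    fine_loss = -np.sum(fine_logprobs)
    coarse_loss = -np.sum(coarse_logprobs)
    return fine_loss + alpha_z * coarse_loss
```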
We apply multi-task learning to question answering by reusing the architecture from min-seo-hajishirzi:2017:Short to define models for both fine-grained short answer selection $T_y$ and coarse-grained paragraph selection $T_z$ . After the two models are trained, only the model for the fine-grained task $T_y$ is used at test time to make predictions for the task of interest.
The shared component with parameters $\theta _s$ maps the sequence of tokens in the document $d$ to continuous representations contextualized with respect to the question $q$ and the tokens in the paragraph $p$ . We denote these representations as $\mathbf {h}(x,\theta _s) = (\mathbf {h}^1(\theta _s),\mathbf {h}^2(\theta _s),\ldots ,\mathbf {h}^{M}(\theta _s)),$
where we omit the dependence on $x$ for simplicity. Each paragraph representation is a sequence of contextualized token representations, where $\mathbf {h}^p(\theta _s) = {h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s).$
Fine-grained answer selection model
The fine-grained answer selection model $P(y|x,\theta _s,\theta _y)$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ and makes predictions assuming that the start and end positions of the answer are independent, as in BiDAF BIBREF4 . The output parameters $\theta _y$ contain separate weights for predicting starts and ends of spans: $\theta _y = \begin{bmatrix} \theta _y^{\mathit {start}} & \theta _y^{\mathit {end}} \end{bmatrix}$
The probability of answer start $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ is proportional to $\exp (h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)\cdot \theta _y^{\mathit {start}})$ , where $h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)$ is the hidden representation of the token $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ , given shared parameters $\theta _s$ . The probability for end of answer positions is defined analogously.
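A schematic of the independent start/end scoring described above, using NumPy in place of the actual network. Normalizing within a single paragraph is a simplifying assumption of this sketch; the full model scores spans across the whole document.

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def span_start_end_probs(h, theta_start, theta_end):
    """h: [num_tokens, hidden] contextualized token representations of one paragraph.
    Start and end positions are scored independently, as in BiDAF."""
    p_start = softmax(h @ theta_start)  # P(a_start = i)
    p_end = softmax(h @ theta_end)      # P(a_end = j)
    return p_start, p_end

# Under the independence assumption, P(span i..j) = p_start[i] * p_end[j].
```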
Paragraph answer selection model
The paragraph selection model for task $T_{z}$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ for the tokens in the document. Because this model assigns scores at the paragraph granularity (as opposed to token granularity), we apply a pooling operation to the token representations to derive single vector paragraph representations. As in BIBREF2 , we use max-pooling over token representations and arrive at $h^p(\theta _s)=\mbox{max}({h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s))$
Using the coarse-grained task-specific parameters $\theta _z$ , we define the probability distribution over paragraphs as: $P(a_p = p | x, \theta _s, \theta _z) = \frac{\exp (h^p(\theta _s) \cdot \theta _z)}{\sum _{p^{\prime }}{\exp (h^{p^{\prime }}(\theta _s) \cdot \theta _z)}} $
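The paragraph scorer can be sketched in a few lines: max-pool each paragraph's token vectors, score the pooled vector with $\theta _z$, and softmax over paragraphs. This is an illustrative rendering, not the released implementation.

```python
import numpy as np

def paragraph_probs(token_reps_per_paragraph, theta_z):
    """Max-pool token vectors within each paragraph, then softmax over paragraphs."""
    pooled = [np.max(h, axis=0) for h in token_reps_per_paragraph]  # one [hidden] vector per paragraph
    scores = np.array([v @ theta_z for v in pooled])
    scores = scores - scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs  # P(a_p = p | x)
```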
Latent Variable Methods for MixedQA
We study two types of latent variable methods that capture the dependencies between the fine and coarse tasks explicitly. Unlike the multitask learning algorithm described above, both eliminate the need for parameters specifically for the coarse task $\theta _z$ , since we treat the fine labels as a latent variable in the coarsely annotated data.
The dependencies between the coarse and fine supervision labels can be captured by the following consistency constraints implied by our task definition: $ \begin{split} P(y, z|x) = 0, & \forall y \notin \mathcal {Y}(z, x), \text{ and } \\ P(z|y,x) = 1, & \forall y \in \mathcal {Y}(z, x). \end{split} $
Maximum Marginal Likelihood
For the task of document-level QA, these constraints ensure that a paragraph is labeled as positive iff there exists a positive answer text span inside the paragraph.
The idea of the maximum marginal likelihood method is to define a distribution over coarse labels using the fine-grained model's distribution over fine labels. By expanding the above equations expressing the task dependencies,
$$P(z|x, \theta ) = \sum _{y \in \mathcal {Y}(x)} P(y,z|x, \theta ) = \hspace{-6.0pt}\sum _{y \in \mathcal {Y}(z, x)}\hspace{-6.0pt}P(y|x, \theta )$$ (Eq. 17)
This equation simply says that the probability that a given paragraph $z$ is relevant is the sum of the probabilities of all possible short answer spans within the paragraph.
The objective function for the coarsely labeled data $D_{z}$ can be expressed as a function of the parameters of the fine-grained task model as:
$$\begin{split} -\sum _{(x,z) \in D_{z}} \log \sum _{y \in \mathcal {Y}(z, x)} P(y|x, \theta _s, \theta _y) \end{split}$$ (Eq. 18)
The fine-grained task loss and the coarse-grained task loss are interpolated with a parameter $\alpha _z$ , as for the multi-task approach.
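In implementation terms, Eq. 18 is a log-sum-exp of span log-probabilities restricted to the labeled paragraph. A minimal sketch follows; the flattened span indexing is an assumption of the illustration.

```python
import numpy as np

def mml_coarse_loss(span_logprobs, span_in_gold_paragraph):
    """-log sum_{y in Y(z,x)} P(y|x): marginalize span probabilities over the
    spans contained in the paragraph marked relevant by the coarse label."""
    masked = np.where(span_in_gold_paragraph, span_logprobs, -np.inf)
    # log-sum-exp for numerical stability
    m = masked.max()
    return -(m + np.log(np.sum(np.exp(masked - m))))
```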
Posterior Distillation
In addition to direct maximization of the marginal likelihood for latent variable models BIBREF5 , prior work has explored EM-based optimization BIBREF6 including generalized EM BIBREF7 , which is applicable to neural models BIBREF8 .
We present a class of optimization algorithms which we term Posterior Distillation, which includes generalized EM for our problem as a special case, and has close connections to knowledge distillation BIBREF9 , BIBREF10 .
We begin by describing an online generalized EM optimization algorithm for the latent variable model from equation ( 17 ) and show how it can be generalized to multiple variants inspired by knowledge distillation with privileged information BIBREF11 . We refer to the more general approach as Posterior Distillation.
Posterior Distillation Algorithm. While not converged: (1) sample a mini-batch $(x_1,y) \sim D_y$ and $(x_2,z) \sim D_z$ ; (2) calculate the predicted distribution $P(\hat{y}|x_2, \theta ^{old})$ for the current parameters $\theta ^{old}$ ; (3) correct and renormalize the predicted distribution using the coarse supervision signal by setting $q(\hat{y}|x_2) \propto P(\hat{y}|x_2, \theta ^{old})$ if $\hat{y} \in \mathcal {Y}(z)$ and $q(\hat{y}|x_2) = 0$ otherwise; (4) update $\theta $ by taking a step to minimize $-\log P(y|x_1, \theta ) + \alpha _z \, \textsc {distance}(P(\hat{y}|x_2, \theta ), q)$ .
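A sketch of one posterior-distillation step on a coarsely labeled example: project the old model distribution onto the spans allowed by $z$, then penalize the distance between the current distribution and that projected teacher. The flattened span indexing and function names are assumptions of this illustration.

```python
import numpy as np

def project_posterior(p_old, span_in_gold_paragraph):
    """E-like step: zero out spans outside the labeled paragraph and renormalize."""
    q = np.where(span_in_gold_paragraph, p_old, 0.0)
    return q / q.sum()

def pd_coarse_loss(p_new, q, distance="err2", eps=1e-12):
    """Distance between the current model distribution and the projected teacher q."""
    if distance == "xent":  # equivalent to (generalized) EM
        return float(-np.sum(q * np.log(p_new + eps)))
    return float(np.sum((p_new - q) ** 2))  # squared error variant
```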
In EM-like algorithms one uses current model parameters $\theta ^{old}$ to make predictions and complete the latent variables in input examples, and then updates the model parameters to maximize the log-likelihood of the completed data. We formalize this procedure for our case below.
Given a coarse example with input $x$ and coarse label $z$ , we first compute the posterior distribution over the fine labels $y$ given $z$ and the current set of parameters $\theta ^{old}$ :
$$P(y | x, z, \theta ^{old}) = \frac{[[y \in {\cal Y}(z, x)]] \times P(y | x, \theta ^{old})}{\displaystyle \sum _{y^{\prime } \in {\mathcal {Y}}(z, x)} P(y^{\prime } | x, \theta ^{old})}$$ (Eq. 20)
where $[[\cdot ]]$ is the indicator function. In EM, we update the parameters $\theta $ to minimize the negative expected log-likelihood of the fine labels with respect to the posterior distribution: $ Q(\theta , \theta ^{old}) &= -\mathop {\mathbb {E}}_{P(y | x, z, \theta ^{old})} \log P(y | x, \theta )\\ &= -\sum _{y \in {\cal Y}(x)} P(y | x, z, \theta ^{old}) \log P(y | x, \theta ) $
By taking a gradient step towards minimizing $Q(\theta , \theta ^{old})$ with respect to $\theta $ , we arrive at a form of generalized EM BIBREF7 . If the loss $Q$ is computed over a mini-batch, this is a form of online EM.
We propose a variant of this EM algorithm that is inspired by knowledge distillation methods BIBREF9 , BIBREF10 , where a student model learns to minimize the distance between its predictions and a teacher model's predictions. In our case, we can consider the posterior distribution $P(y | x, z, \theta ^{old})$ to be the teacher, and the model distribution $P(y | x, \theta )$ to be the student. Here the teacher distribution is directly derived from the model (student) distribution $P(y | x, \theta ^{old})$ by integrating the information from the coarse label $z$ . The coarse labels can be seen as privileged information BIBREF11 which the student does not condition on directly.
Let us define $Q(\theta , \theta ^{old})$ in a more general form, where it is a general distance function rather than cross-entropy: $ Q(\theta , \theta ^{old}) = \textsc {distance}(P(y | x, z, \theta ^{old}), P(y | x, \theta )) $
We refer to the class of learning objectives in this form as posterior distillation. When the distance function is cross entropy, posterior distillation is equivalent to EM. As is common in distillation techniques BIBREF12 , we can apply other distance functions, such as the squared error. $ Q(\theta , \theta ^{old}) = \sum _{y \in {\cal Y}(x)} \left\Vert P(y | x, z, \theta ^{old}) - P(y | x, \theta ) \right\Vert _2^2 $
In our experiments, we found that squared error outperforms cross entropy consistently.
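As a toy numerical check of the two distances on the same teacher/student pair (the numbers are illustrative only and are not results from the paper):

```python
import numpy as np

q = np.array([0.6, 0.4, 0.0, 0.0])            # projected teacher over 4 candidate spans
peaked = np.array([0.97, 0.01, 0.01, 0.01])   # over-confident student distribution
flat = np.array([0.45, 0.35, 0.10, 0.10])     # softer student distribution

def xent(p): return float(-np.sum(q * np.log(p + 1e-12)))
def err2(p): return float(np.sum((p - q) ** 2))

print(xent(peaked), xent(flat))  # ~1.86 vs ~0.90
print(err2(peaked), err2(flat))  # ~0.29 vs ~0.05
```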
This algorithm also has a close connection to Posterior Regularization BIBREF13 . The coarse supervision labels $z$ can be integrated using linear expectation constraints on the model posteriors $P(y|x,\theta )$ , and a KL-projection onto the constrained space can be done exactly in closed form using equation 20 . Thus the PR approach in this case is equivalent to posterior distillation with cross-entropy and to EM. Note that the posterior distillation method is more general because it allows additional distance functions.
The combined loss function using both finely and coarsely labeled data to be minimized is:
$$\begin{split} & \sum _{(x,y) \in D_{y}} -\log P(y|x, \theta _s) \\ +~~~\alpha _z &\sum _{(x,z) \in D_{z}} Q(\theta ,\theta ^{old},x,z) \end{split}$$ (Eq. 21)
Figure 2 presents an illustration of the multi-task and posterior distillation approaches for learning from both finely and coarsely labeled data. Algorithm 1 lists the steps of optimization. Each iteration of the loop samples mini-batches from the union of finely and coarsely labeled data and takes a step to minimize the combined loss.
Experiments
We present experiments on question answering using the multi-task and latent variable methods introduced in the prior section.
Mixed supervision data
We focus on the document-level variant of the SQuAD dataset BIBREF1 , as defined by docqa, where given a question and document, the task is to determine the relevant passage and answer span within the passage $(a_p, a_{\mathit {start}}, a_{\mathit {end}})$ . We define finely annotated subsets $D_{y}$ with two different sizes: 5% and 20% of the original dataset. These are paired with non-overlapping subsets of coarsely annotated data $D_{z}$ with sizes 20% and 70% of the original training set, respectively. Both of these settings represent the regime where coarsely annotated data is available in higher volume, because such data can be obtained faster and at lower cost. For both dataset settings, we derive $D_{y}$ and $D_{z}$ from the SQuAD training set, by allocating whole documents with all their corresponding questions to a given subset. In both settings, we also reserve a finely annotated non-overlapping set $\mbox{Dev}_{y}$ , which is used to select optimal hyperparameters for each method. We report final performance metrics on $\mbox{Test}_{y}$ , which is the unseen SQuAD development set.
QA model
We build on the state-of-the-art publicly available question answering system by docqa. The system extends BiDAF BIBREF4 with self-attention and performs well on document-level QA. We reuse all hyperparameters from docqa with the exception of the number of paragraphs sampled in training: 8 instead of 4. Using more negative examples was important when learning from both fine and coarse annotations. The model uses character embeddings with dimension 50, pre-trained GloVe embeddings, and bi-directional GRU encoders with hidden units of size 100. We use Adadelta for optimization for all methods. We tune two hyperparameters separately for each condition based on the held-out set: (1) $\alpha \in \lbrace .01, .1, .5, 1, 5, 10, 100 \rbrace $ , the weight of the coarse loss, and (2) the number of steps for early stopping. The training time for all methods using both coarse and fine supervision is comparable.
Results
We report results evaluating the impact of using coarsely annotated data in the two dataset conditions in Figure 3 . There are two groups of rows corresponding to the two data sizes: in the smaller setting, only 5% of the original fine-grained data is used, and in the medium setting, 20% of the fine-grained data is used. The first row in each group indicates the performance when using only finely labeled fully supervised data. The column Fine-F1 indicates the performance metric of interest – the test set performance on document-level short answer selection. The next rows indicate the performance of a multi-task and the best latent variable method when using the finely labeled data plus the additional coarsely annotated datasets. The ceiling performance in each group shows the oracle achieved by a model also looking at the gold fine-grained labels for the data that the rest of the models see with only coarse paragraph-level annotation. The column Gain indicates the relative error reduction of each model compared to the supervised-only baseline with respect to the ceiling upper bound. As we can see all models benefit from coarsely labeled data and achieve at least 20% error reduction. The best latent variable method (Posterior Distillation with squared error distance) significantly outperforms the multi-task approach, achieving up to 41% relative gain.
Figure 4 compares the performance of the three different optimization methods using latent fine-grained answer variables for coarsely annotated data. Here we include an additional last column reporting performance on an easier task where the correct answer paragraph is given at test time, and the model only needs to pick out the short answer within the given paragraph. We include this measurement to observe whether models are improving just by picking out relevant paragraphs or also by selecting the finer-grained short answers within them. Since EM and MML are known to optimize the same function, it is unsurprising that MML and PD with cross-entropy (equivalent to EM) perform similarly. For posterior distillation, we observe substantially better performance with the squared error as the distance function, particularly in the second setting, where there is more coarsely annotated data.
To gain more insight into the behavior of the different methods using coarsely annotated data, we measured properties of the predictive distributions $P(y|x,\theta )$ on the dataset used with coarse labels in training, $D_{70coarse}$ . The results are shown in Figure 5 . For models MTL, MML, PD( $xent$ ), and PD( $err^2$ ), trained on finely labeled $D_{20fine}$ and coarsely labeled $D_{70coarse}$ , we study the predictive distributions $P(y|x,\theta ^M)$ for the four model types $M$ . We measure the properties of these distributions on the dataset $D_{70fine}$ , which is the finely labeled version of the same (question, document)-pairs $D_{70}$ as $D_{70coarse}$ . Note that none of the models see the fine-grained short answer labels for $D_{70}$ in training since they only observe paragraph-level relevance annotations. Nevertheless, the models can assign a probability distribution over fine-grained labels in the documents, and we can measure the peakiness (entropy) of this distribution, as well as see how it compares to the gold hidden label distribution.
The first column in the table reports the entropies of the predictive distributions for the four trained models (using the fine task model for the multi-task method MTL). We can see that multi-task method MTL and PD( $xent$ ) (which is equivalent to generalized EM) have lowest entropy, and are most confident about their short answer predictions. MML marginalizes over possible fine answers, resulting in flatter predictive distributions which spread mass among multiple plausible answer positions. The best-performing method PD( $err^2$ ) is somewhere in between and maintains more uncertainty. The next two columns in the Table look at the cross-entropy ( $xent$ ) and squared error ( $err^2$ ) distances of the predictive distributions with respect to the gold one. The gold label distribution has mass of one on a single point indicating the correct fine answer positions. Note that none of the models have seen this gold distribution during training and have thus not been trained to minimize these distances (the PD latent variable models are trained to minimize distance with respect to projected model distributions given coarse passage labels $z$ ). We can see that the predictive distribution of the best method PD( $err^2$ ) is closest to the gold labels. The maximum marginal likelihood method MML comes second in approaching the gold distribution. The multi-task approach lags behind others in distance to the fine-grained gold labels, but comes first in the measurement in the last column, Passage-MRR. That column indicates the mean reciprocal rank of the correct gold passage according to the model. Here passages are ranked by the score of the highest-scoring short answer span within the passage. This measurement indicates that the multi-task model is able to learn to rank passages correctly from the coarse-grained passage-level annotation, but has a harder time to transfer this improvement to the task of picking fine-grained short answers within the passages.
Text-based Question Answering
In span-based reading comprehension, a system must be able to extract a plausible text-span answer for a given question from a context document or paragraph BIBREF1 , BIBREF14 , BIBREF15 . Most work has focused on selecting short answers given relevant paragraphs, but datasets and works considering the more realistic task of selection from full documents are starting to appear BIBREF14 .
Sentence selection or paragraph selection datasets test whether a system can correctly rank texts that are relevant for answering a question higher than texts that are not. Wang2007EMNLP constructed the QASent dataset based on questions from TREC 8-13 QA tracks. WikiQA BIBREF16 associates questions from the Bing search query log with all the sentences in the corresponding Wikipedia summary paragraph, which are then labeled by crowd workers. Most state-of-the-art models for both types of tasks make use of neural network modules to construct and compare representations for a question and the possible answers. We build on a near state-of-the-art baseline model and evaluate on a document-level short question answering task.
Data Augmentation and Multi-Task Learning in QA
There have been several works addressing the paucity of annotated data for QA. Data noisily annotated with short answer spans has been generated automatically through distant supervision and shown to be useful BIBREF14 . Unlabeled text and data augmentation through machine translation have been used to improve model quality BIBREF17 , BIBREF18 , BIBREF19 . min-seo-hajishirzi:2017:Short used short-answer annotations in SQuAD BIBREF1 to improve paragraph-level question answering for WikiQA BIBREF16 . To the best of our knowledge, there has been no prior work using QA data annotated at the paragraph level to improve models for short question answering. | The system extends BiDAF BIBREF4 with self-attention |
21656039994cab07f79e89553cbecc31ba9853d4 | 21656039994cab07f79e89553cbecc31ba9853d4_0 | Q: What datasets have this method been evaluated on?
Text: Introduction
Question answering (QA) systems can provide most value for users by showing them a fine-grained short answer (answer span) in a context that supports the answer (paragraph in a document). However, fine-grained short answer annotations for question answering are costly to obtain, whereas non-expert annotators can annotate coarse-grained passages or documents faster and with higher accuracy. In addition, coarse-grained annotations are often freely available from community forums such as Quora. Therefore, methods that can learn to select short answers based on more abundant coarsely annotated paragraph-level data can potentially bring significant improvements. As an example of the two types of annotation, Figure 1 shows on the left a question with corresponding short answer annotation (underlined short answer) in a document, and on the right a question with a document annotated at the coarse-grained paragraph relevance level. In this work we study methods for learning short answer models from small amounts of data annotated at the short answer level and larger amounts of data annotated at the paragraph level. min-seo-hajishirzi:2017:Short recently studied a related problem of transferring knowledge from a fine-grained QA model to a coarse-grained model via multi-task learning and showed that finely annotated data can help improve performance on the coarse-grained task. We investigate the opposite and arguably much more challenging direction: improving fine-grained models using coarse-grained data.
We explore alternatives to the standard approach of multi-task learning via representation sharing BIBREF0 by leveraging the known correspondences between the coarse and fine-grained tasks. In the standard representation sharing approach, the dependencies between the fine-grained and coarse-grained tasks are modeled implicitly. The model must learn representations that are useful for all tasks without knowing how they relate to each other. However, in the scenario of learning from both fine and coarse supervision, the dependencies between the tasks can be modeled explicitly. For example, if a paragraph answers a question, we know that there exists a fine-grained answer span in the paragraph, providing strong constraints on the possible fine-grained answers for the question.
We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:
Task Definitions
The fine-grained short question answering task requires selecting an answer span from a document containing multiple paragraphs. In the left example in Figure 1, the short answer to the question What was Nikola Tesla's ethnicity? is the phrase Serbian in the first paragraph of the document.
The coarse-grained labels indicate the relevance of document paragraphs. In the right example in Figure 1, the labels indicate whether or not the paragraphs in a given document contain the answers for the given question What was Martin Luther's nationality? without specifying the answer spans.
The goal of our paper is to design methods to learn from both fine-grained and coarse-grained labeled data, to improve systems for fine-grained QA.
Formal Definition
We define the fine-grained task of interest $T_y$ as predicting outputs $y$ from a set of possible outputs ${\cal {Y}}(x)$ given inputs $x$ . We say that a task $T_z$ to predict outputs $z$ given inputs $x$ is a coarse-grained counterpart of $T_y$ , iff each coarse label $z$ determines a subset of possible labels ${\cal {Y}}(z,x) \subset {\cal {Y}}(x)$ , and each fine label $y$ has a deterministically corresponding single coarse label $z$ . We refer to the fine-grained and coarse-grained training data as $D_{y}$ and $D_{z}$ respectively.
For our application of document-level QA, $T_y$ is the task of selecting a short answer span from the document, and $T_z$ is the task of selecting a paragraph from the document. The input $x$ to both tasks is a question-document pair. Each document is a sequence of $M$ paragraphs, and each paragraph with index $p$ (where $1 \le p \le M$ ) is a sequence of $n_p$ tokens. The set of possible outputs for the fine-grained task $T_y$ is the set of all phrases (contiguous substring spans) in all document paragraphs. The possible outputs for the coarse task $T_z$ are the paragraph indices $p$ . It is clear that each paragraph output $z$ determines a subset of possible outputs ${\cal {Y}}(z,x)$ (the phrases in the paragraph).
Fine-grained annotation is provided as $y=(a_{\mathit {p}}, a_{\mathit {start}}, a_{\mathit {end}})$ , where $a_{\mathit {p}}$ indicates the index of the paragraph containing the answer, and $a_{\mathit {start}}, a_{\mathit {end}}$ respectively indicate the start and end position of the short answer.
Paragraph-level supervision is provided as $z=(a_{\mathit {p}}, \_,\_)$ , only indicating the paragraph index of the answer, without the start and end token indices of the answer span. The coarse labels $z$ in this case limit the set of possible labels $y$ for $x$ to:
$${\cal {Y}}(z,x) = \lbrace (a_{\mathit {p}}, a^{\prime }_{\mathit {start}}, a^{\prime }_{\mathit {end}})~|~1 \le a^{\prime }_{\mathit {start}} \le a^{\prime }_{\mathit {end}} \le n_p\rbrace .$$ (Eq. 8)
In the presence of the coarsely annotated $D_z$ when the task of interest is $T_y$ , the research question becomes: how can we train a model to use both $D_z$ and $D_y$ in the most effective way?
Multi-task learning for MixedQA
The multi-task learning approach defines models for $T_y$ and $T_z$ that share some of their parameters. The data for task $T_z$ helps improve the model for $T_y$ via these shared parameters (representations). Multi-task learning with representation sharing is widely used with auxiliary tasks from reconstruction of unlabeled data BIBREF0 to machine translation and syntactic parsing BIBREF3 , and can be used with any task $T_z$ which is potentially related to the main task of interest $T_y$ .
Let $\theta = \begin{bmatrix} \theta _y & \theta _z & \theta _{s} \end{bmatrix}$ be the set of parameters in the two models. $\theta _y$ denotes parameters exclusive to the fine-grained task $T_y$ , $\theta _z$ denotes parameters exclusive to the coarse-grained task $T_z$ , and $\theta _s$ denotes the shared parameters across the two tasks.
Then the multi-task learning objective is to minimize $L(\theta , D_y,D_z)$ :
$$\begin{split} & -\sum _{(x,y) \in D_{y}} \log P(y|x, \theta _s, \theta _y) \\ -~~~\alpha _z &\sum _{(x,z) \in D_{z}} \log P(z|x , \theta _s, \theta _z) \end{split}$$ (Eq. 10)
Here $\alpha _z$ is a trade-off hyper-parameter to balance the objectives of the fine and coarse models.
We apply multi-task learning to question answering by reusing the architecture from min-seo-hajishirzi:2017:Short to define models for both fine-grained short answer selection $T_y$ and coarse-grained paragraph selection $T_z$ . After the two models are trained, only the model for the fine-grained task $T_y$ is used at test time to make predictions for the task of interest.
The shared component with parameters $\theta _s$ maps the sequence of tokens in the document $d$ to continuous representations contextualized with respect to the question $q$ and the tokens in the paragraph $p$ . We denote these representations as $\mathbf {h}(x,\theta _s) = (\mathbf {h}^1(\theta _s),\mathbf {h}^2(\theta _s),\ldots ,\mathbf {h}^{M}(\theta _s)),$
where we omit the dependence on $x$ for simplicity. Each paragraph representation is a sequence of contextualized token representations, where $\mathbf {h}^p(\theta _s) = {h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s).$
Fine-grained answer selection model
The fine-grained answer selection model $P(y|x,\theta _s,\theta _y)$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ and makes predictions assuming that the start and end positions of the answer are independent, as in BiDAF BIBREF4 . The output parameters $\theta _y$ contain separate weights for predicting starts and ends of spans: $\theta _y = \begin{bmatrix} \theta _y^{\mathit {start}} & \theta _y^{\mathit {end}} \end{bmatrix}$
The probability of answer start $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ is proportional to $\exp (h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)\cdot \theta _y^{\mathit {start}})$ , where $h(a_{\mathit {start}}, a_{\mathit {p}}, \theta _s)$ is the hidden representation of the token $a_{\mathit {start}}$ in paragraph $a_{\mathit {p}}$ , given shared parameters $\theta _s$ . The probability for end of answer positions is defined analogously.
Paragraph answer selection model
The paragraph selection model for task $T_{z}$ uses the same hidden representations $\mathbf {h}(x,\theta _s)$ for the tokens in the document. Because this model assigns scores at the paragraph granularity (as opposed to token granularity), we apply a pooling operation to the token representations to derive single vector paragraph representations. As in BIBREF2 , we use max-pooling over token representations and arrive at $h^p(\theta _s)=\mbox{max}({h_1}^p(\theta _s),\ldots ,{{h}_{n_p}}^p(\theta _s))$
Using the coarse-grained task-specific parameters $\theta _z$ , we define the probability distribution over paragraphs as: $P(a_p = p | x, \theta _s, \theta _z) = \frac{\exp (h^p(\theta _s) \cdot \theta _z)}{\sum _{p^{\prime }}{\exp (h^{p^{\prime }}(\theta _s) \cdot \theta _z)}} $
Latent Variable Methods for MixedQA
We study two types of latent variable methods that capture the dependencies between the fine and coarse tasks explicitly. Unlike the multitask learning algorithm described above, both eliminate the need for parameters specifically for the coarse task $\theta _z$ , since we treat the fine labels as a latent variable in the coarsely annotated data.
The dependencies between the coarse and fine supervision labels can be captured by the following consistency constraints implied by our task definition: $ \begin{split} P(y, z|x) = 0, & \forall y \notin \mathcal {Y}(z, x), \text{ and } \\ P(z|y,x) = 1, & \forall y \in \mathcal {Y}(z, x). \end{split} $
Maximum Marginal Likelihood
For the task of document-level QA, these constraints ensure that a paragraph is labeled as positive iff there exists a positive answer text span inside the paragraph.
The idea of the maximum marginal likelihood method is to define a distribution over coarse labels using the fine-grained model's distribution over fine labels. By expanding the above equations expressing the task dependencies,
$$P(z|x, \theta ) = \sum _{y \in \mathcal {Y}(x)} P(y,z|x, \theta ) = \hspace{-6.0pt}\sum _{y \in \mathcal {Y}(z, x)}\hspace{-6.0pt}P(y|x, \theta )$$ (Eq. 17)
This equation simply says that the probability that a given paragraph $z$ is relevant is the sum of the probabilities of all possible short answer spans within the paragraph.
The objective function for the coarsely labeled data $D_{z}$ can be expressed as a function of the parameters of the fine-grained task model as:
$$\begin{split} -\sum _{(x,z) \in D_{z}} \log \sum _{y \in \mathcal {Y}(z, x)} P(y|x, \theta _s, \theta _y) \end{split}$$ (Eq. 18)
The fine-grained task loss and the coarse-grained task loss are interpolated with a parameter $\alpha _z$ , as for the multi-task approach.
Posterior Distillation
In addition to direct maximization of the marginal likelihood for latent variable models BIBREF5 , prior work has explored EM-based optimization BIBREF6 including generalized EM BIBREF7 , which is applicable to neural models BIBREF8 .
We present a class of optimization algorithms which we term Posterior Distillation, which includes generalized EM for our problem as a special case, and has close connections to knowledge distillation BIBREF9 , BIBREF10 .
We begin by describing an online generalized EM optimization algorithm for the latent variable model from equation ( 17 ) and show how it can be generalized to multiple variants inspired by knowledge distillation with privileged information BIBREF11 . We refer to the more general approach as Posterior Distillation.
Posterior Distillation Algorithm. While not converged: (1) sample a mini-batch $(x_1,y) \sim D_y$ and $(x_2,z) \sim D_z$ ; (2) calculate the predicted distribution $P(\hat{y}|x_2, \theta ^{old})$ for the current parameters $\theta ^{old}$ ; (3) correct and renormalize the predicted distribution using the coarse supervision signal by setting $q(\hat{y}|x_2) \propto P(\hat{y}|x_2, \theta ^{old})$ if $\hat{y} \in \mathcal {Y}(z)$ and $q(\hat{y}|x_2) = 0$ otherwise; (4) update $\theta $ by taking a step to minimize $-\log P(y|x_1, \theta ) + \alpha _z \, \textsc {distance}(P(\hat{y}|x_2, \theta ), q)$ .
In EM-like algorithms one uses current model parameters $\theta ^{old}$ to make predictions and complete the latent variables in input examples, and then updates the model parameters to maximize the log-likelihood of the completed data. We formalize this procedure for our case below.
Given a coarse example with input $x$ and coarse label $z$ , we first compute the posterior distribution over the fine labels $y$ given $z$ and the current set of parameters $\theta ^{old}$ :
$$P(y | x, z, \theta ^{old}) = \frac{[[y \in {\cal Y}(z, x)]] \times P(y | x, \theta ^{old})}{\displaystyle \sum _{y^{\prime } \in {\mathcal {Y}}(z, x)} P(y^{\prime } | x, \theta ^{old})}$$ (Eq. 20)
where $[[\cdot ]]$ is the indicator function. In EM, we update the parameters $\theta $ to minimize the negative expected log-likelihood of the fine labels with respect to the posterior distribution: $ Q(\theta , \theta ^{old}) &= -\mathop {\mathbb {E}}_{P(y | x, z, \theta ^{old})} \log P(y | x, \theta )\\ &= -\sum _{y \in {\cal Y}(x)} P(y | x, z, \theta ^{old}) \log P(y | x, \theta ) $
By taking a gradient step towards minimizing $Q(\theta , \theta ^{old})$ with respect to $\theta $ , we arrive at a form of generalized EM BIBREF7 . If the loss $Q$ is computed over a mini-batch, this is a form of online EM.
We propose a variant of this EM algorithm that is inspired by knowledge distillation methods BIBREF9 , BIBREF10 , where a student model learns to minimize the distance between its predictions and a teacher model's predictions. In our case, we can consider the posterior distribution $P(y | x, z, \theta ^{old})$ to be the teacher, and the model distribution $P(y | x, \theta )$ to be the student. Here the teacher distribution is directly derived from the model (student) distribution $P(y | x, \theta ^{old})$ by integrating the information from the coarse label $z$ . The coarse labels can be seen as privileged information BIBREF11 which the student does not condition on directly.
Let us define $Q(\theta , \theta ^{old})$ in a more general form, where it is a general distance function rather than cross-entropy: $ Q(\theta , \theta ^{old}) = \textsc {distance}(P(y | x, z, \theta ^{old}), P(y | x, \theta )) $
We refer to the class of learning objectives in this form as posterior distillation. When the distance function is cross entropy, posterior distillation is equivalent to EM. As is common in distillation techniques BIBREF12 , we can apply other distance functions, such as the squared error. $ Q(\theta , \theta ^{old}) = \sum _{y \in {\cal Y}(x)} \left\Vert P(y | x, z, \theta ^{old}) - P(y | x, \theta ) \right\Vert _2^2 $
In our experiments, we found that squared error outperforms cross entropy consistently.
This algorithm also has a close connection to Posterior Regularization BIBREF13 . The coarse supervision labels $z$ can be integrated using linear expectation constraints on the model posteriors $P(y|x,\theta )$ , and a KL-projection onto the constrained space can be done exactly in closed form using equation 20 . Thus the PR approach in this case is equivalent to posterior distillation with cross-entropy and to EM. Note that the posterior distillation method is more general because it allows additional distance functions.
The combined loss function using both finely and coarsely labeled data to be minimized is:
$$\begin{split} & \sum _{(x,y) \in D_{y}} -\log P(y|x, \theta _s) \\ +~~~\alpha _z &\sum _{(x,z) \in D_{z}} Q(\theta ,\theta ^{old},x,z) \end{split}$$ (Eq. 21)
Figure 2 presents an illustration of the multi-task and posterior distillation approaches for learning from both finely and coarsely labeled data. Algorithm 1 lists the steps of optimization. Each iteration of the loop samples mini-batches from the union of finely and coarsely labeled data and takes a step to minimize the combined loss.
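Algorithm 1 and the combined loss in Eq. 21 amount to a fairly standard alternating mini-batch loop. The sketch below is a schematic rendering only: the model, optimizer, and batch-sampling interfaces are assumed placeholders, not the released implementation.

```python
import numpy as np

def train_mixed(model, optimizer, fine_batches, coarse_batches, alpha_z, steps):
    """One pass over Algorithm 1: each step mixes a finely and a coarsely labeled batch."""
    for step, ((x1, y), (x2, z)) in enumerate(zip(fine_batches, coarse_batches)):
        if step >= steps:
            break
        p_old = model.predict_spans(x2)            # P(y-hat | x2, theta_old), flattened over spans
        mask = model.spans_in_paragraph(x2, z)     # indicator for Y(z, x2)
        q = np.where(mask, p_old, 0.0)
        q = q / q.sum()                            # projected teacher distribution
        fine_loss = -np.log(model.prob_of(x1, y))  # supervised span loss on D_y
        coarse_loss = np.sum((model.predict_spans(x2) - q) ** 2)  # PD(err^2) term
        optimizer.step(model, fine_loss + alpha_z * coarse_loss)
    return model
```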
Experiments
We present experiments on question answering using the multi-task and latent variable methods introduced in the prior section.
Mixed supervision data
We focus on the document-level variant of the SQuAD dataset BIBREF1 , as defined by docqa, where given a question and document, the task is to determine the relevant passage and answer span within the passage $(a_p, a_{\mathit {start}}, a_{\mathit {end}})$ . We define finely annotated subsets $D_{y}$ with two different sizes: 5% and 20% of the original dataset. These are paired with non-overlapping subsets of coarsely annotated data $D_{z}$ with sizes 20% and 70% of the original training set, respectively. Both of these settings represent the regime where coarsely annotated data is available in higher volume, because such data can be obtained faster and at lower cost. For both dataset settings, we derive $D_{y}$ and $D_{z}$ from the SQuAD training set, by allocating whole documents with all their corresponding questions to a given subset. In both settings, we also reserve a finely annotated non-overlapping set $\mbox{Dev}_{y}$ , which is used to select optimal hyperparameters for each method. We report final performance metrics on $\mbox{Test}_{y}$ , which is the unseen SQuAD development set.
QA model
We build on the state-of-the-art publicly available question answering system by docqa. The system extends BiDAF BIBREF4 with self-attention and performs well on document-level QA. We reuse all hyperparameters from docqa with the exception of the number of paragraphs sampled in training: 8 instead of 4. Using more negative examples was important when learning from both fine and coarse annotations. The model uses character embeddings with dimension 50, pre-trained GloVe embeddings, and bi-directional GRU encoders with hidden units of size 100. We use Adadelta for optimization for all methods. We tune two hyperparameters separately for each condition based on the held-out set: (1) $\alpha \in \lbrace .01, .1, .5, 1, 5, 10, 100 \rbrace $ , the weight of the coarse loss, and (2) the number of steps for early stopping. The training time for all methods using both coarse and fine supervision is comparable.
Results
We report results evaluating the impact of using coarsely annotated data in the two dataset conditions in Figure 3 . There are two groups of rows corresponding to the two data sizes: in the smaller setting, only 5% of the original fine-grained data is used, and in the medium setting, 20% of the fine-grained data is used. The first row in each group indicates the performance when using only finely labeled fully supervised data. The column Fine-F1 indicates the performance metric of interest – the test set performance on document-level short answer selection. The next rows indicate the performance of a multi-task and the best latent variable method when using the finely labeled data plus the additional coarsely annotated datasets. The ceiling performance in each group shows the oracle achieved by a model also looking at the gold fine-grained labels for the data that the rest of the models see with only coarse paragraph-level annotation. The column Gain indicates the relative error reduction of each model compared to the supervised-only baseline with respect to the ceiling upper bound. As we can see all models benefit from coarsely labeled data and achieve at least 20% error reduction. The best latent variable method (Posterior Distillation with squared error distance) significantly outperforms the multi-task approach, achieving up to 41% relative gain.
Figure 4 compares the performance of the three different optimization methods using latent fine-grained answer variables for coarsely annotated data. Here we include an additional last column reporting performance on an easier task where the correct answer paragraph is given at test time, and the model only needs to pick out the short answer within the given paragraph. We include this measurement to observe whether models are improving just by picking out relevant paragraphs or also by selecting the finer-grained short answers within them. Since EM and MML are known to optimize the same function, it is unsurprising that MML and PD with cross-entropy (equivalent to EM) perform similarly. For posterior distillation, we observe substantially better performance with the squared error as the distance function, particularly in the second setting, where there is more coarsely annotated data.
To gain more insight into the behavior of the different methods using coarsely annotated data, we measured properties of the predictive distributions $P(y|x,\theta )$ on the dataset used with coarse labels in training, $D_{70coarse}$ . The results are shown in Figure 5 . For models MTL, MML, PD( $xent$ ), and PD( $err^2$ ), trained on finely labeled $D_{20fine}$ and coarsely labeled $D_{70coarse}$ , we study the predictive distributions $P(y|x,\theta ^M)$ for the four model types $M$ . We measure the properties of these distributions on the dataset $D_{70fine}$ , which is the finely labeled version of the same (question, document)-pairs $D_{70}$ as $D_{70coarse}$ . Note that none of the models see the fine-grained short answer labels for $D_{70}$ in training since they only observe paragraph-level relevance annotations. Nevertheless, the models can assign a probability distribution over fine-grained labels in the documents, and we can measure the peakiness (entropy) of this distribution, as well as see how it compares to the gold hidden label distribution.
The first column in the table reports the entropies of the predictive distributions for the four trained models (using the fine task model for the multi-task method MTL). We can see that multi-task method MTL and PD( $xent$ ) (which is equivalent to generalized EM) have lowest entropy, and are most confident about their short answer predictions. MML marginalizes over possible fine answers, resulting in flatter predictive distributions which spread mass among multiple plausible answer positions. The best-performing method PD( $err^2$ ) is somewhere in between and maintains more uncertainty. The next two columns in the Table look at the cross-entropy ( $xent$ ) and squared error ( $err^2$ ) distances of the predictive distributions with respect to the gold one. The gold label distribution has mass of one on a single point indicating the correct fine answer positions. Note that none of the models have seen this gold distribution during training and have thus not been trained to minimize these distances (the PD latent variable models are trained to minimize distance with respect to projected model distributions given coarse passage labels $z$ ). We can see that the predictive distribution of the best method PD( $err^2$ ) is closest to the gold labels. The maximum marginal likelihood method MML comes second in approaching the gold distribution. The multi-task approach lags behind others in distance to the fine-grained gold labels, but comes first in the measurement in the last column, Passage-MRR. That column indicates the mean reciprocal rank of the correct gold passage according to the model. Here passages are ranked by the score of the highest-scoring short answer span within the passage. This measurement indicates that the multi-task model is able to learn to rank passages correctly from the coarse-grained passage-level annotation, but has a harder time to transfer this improvement to the task of picking fine-grained short answers within the passages.
Text-based Question Answering
In span-based reading comprehension, a system must be able to extract a plausible text-span answer for a given question from a context document or paragraph BIBREF1 , BIBREF14 , BIBREF15 . Most work has focused on selecting short answers given relevant paragraphs, but datasets and works considering the more realistic task of selection from full documents are starting to appear BIBREF14 .
Sentence selection or paragraph selection datasets test whether a system can correctly rank texts that are relevant for answering a question higher than texts that are not. Wang2007EMNLP constructed the QASent dataset based on questions from TREC 8-13 QA tracks. WikiQA BIBREF16 associates questions from the Bing search query log with all the sentences in the corresponding Wikipedia summary paragraph, which are then labeled by crowd workers. Most state-of-the-art models for both types of tasks make use of neural network modules to construct and compare representations for a question and the possible answers. We build on a near state-of-the-art baseline model and evaluate on a document-level short question answering task.
Data Augmentation and Multi-Task Learning in QA
There have been several works addressing the paucity of annotated data for QA. Data noisily annotated with short answer spans has been generated automatically through distant supervision and shown to be useful BIBREF14 . Unlabeled text and data augmentation through machine translation have been used to improve model quality BIBREF17 , BIBREF18 , BIBREF19 . min-seo-hajishirzi:2017:Short used short-answer annotations in SQuAD BIBREF1 to improve paragraph-level question answering for WikiQA BIBREF16 . To the best of our knowledge, there has been no prior work using QA data annotated at the paragraph level to improve models for short question answering. | document-level variants of the SQuAD dataset |