id (string, 40 chars) | pid (string, 42 chars) | input (string, 8.37k to 169k chars) | output (string, 1 to 1.63k chars)
---|---|---|---|
a4d8fdcaa8adf99bdd1d7224f1a85c610659a9d3 | a4d8fdcaa8adf99bdd1d7224f1a85c610659a9d3_0 | Q: When they say "comparable performance", how much of a performance drop do these new embeddings result in?
Text: Introduction
Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, United States etc while the edges represent relationships between them.
These KGs have grown huge, but they are still not complete BIBREF1 . Hence the task of inferring new facts becomes important. Many vector space models have been proposed which can perform reasoning over KGs efficiently BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF0 , BIBREF1 etc. These methods learn representations for entities and relations as vectors in a vector space, capturing global information about the KG. The task of KG inference is then defined as operations over these vectors. Some of these methods like BIBREF0 , BIBREF1 are capable of exploiting additional text data apart from the KG, resulting in better representations.
Although these methods have shown good performance in applications, they don't address the problem of understanding semantics of individual dimensions of the KG embedding. A recent work BIBREF6 addressed the problem of learning semantic features for KGs. However, they don't directly use vector space modeling.
In this work, we focus on incorporating interpretability in KG embeddings. Specifically, we aim to learn interpretable embeddings for KG entities by incorporating additional entity co-occurrence statistics from text data. This work is motivated by BIBREF7 who presented automated methods for evaluating topics learned via topic modelling methods. We adapt these measures for the vector space model and propose a method to directly maximize them while learning KG embedding. To the best of our knowledge, this work presents the first regularization term which induces interpretability in KG embeddings.
Related Work
Several methods have been proposed for learning KG embeddings. They differ in their modeling of entities and relations, their use of text data, and the interpretability of the learned embeddings. We summarize some of these methods in the following sections.
Vector-space models for KG Embeddings
A very effective and powerful set of models are based on translation vectors. These models represent entities as vectors in $d$ -dimensional space, $\mathbb {R}^d$ and relations as translation vectors from head entity to tail entity, in either same or a projected space. TransE BIBREF2 is one of the initial works, which was later improved by many works [ BIBREF3 , BIBREF4 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ]. Also, there are methods which are able to incorporate text data while learning KG embeddings. BIBREF0 is one such method, which assumes a combined universal schema of relations from KG as well as text. BIBREF1 further improves the performance by sharing parameters among similar textual relations.
Interpretability of Embedding
While the vector space models perform well in many tasks, the semantics of learned representations are not directly clear. This problem for word embeddings was addressed by BIBREF12 where they proposed a set of constraints inducing interpretability. However, its adaptation for KG embeddings hasn't been addressed. A recent work BIBREF6 addressed a similar problem, where they learn coherent semantic features for entities and relations in KG. Our method differs from theirs in the following two aspects. Firstly, we use vector space modeling leading directly to KG embeddings while they need to infer KG embeddings from their probabilistic model. Second, we incorporate additional information about entities which helps in learning interpretable embeddings.
Proposed Method
We are interested in inducing interpretability in KG embeddings, and regularization is a natural way to achieve it. We therefore look for novel regularizers for KG embeddings and explore a measure of coherence proposed in BIBREF7 . This measure allows automated evaluation of the quality of topics learned by topic modeling methods by using additional Point-wise Mutual Information (PMI) for word pairs. It was also shown to have high correlation with human evaluation of topics.
Based on this measure of coherence, we propose a regularization term. This term can be used with existing KG embedding methods (e.g., BIBREF0 ) for inducing interpretability. It is described in the following sections.
Coherence
In topic models, the coherence of a topic can be determined by the semantic relatedness among the top entities within the topic. This idea can also be used in vector space models by treating the dimensions of the vector space as topics. With this assumption, we can use the measure of coherence defined in the following section for evaluating the interpretability of the embeddings.
$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.
Coherence for top $k$ entities along dimension $l$ is defined as follows:
$$Coherence@k^{(l)} = \sum _{i=2}^{k}\sum _{j=1}^{i-1}{p_{ij}}$$ (Eq. 5)
where $p_{ij}$ is PMI score between entities $e_i$ and $e_j$ extracted from text data. $Coherence@k$ for the entity embedding matrix $\theta _e$ is defined as the average over all dimensions.
$$Coherence@k = \frac{1}{d} \sum _{l=1}^{d} Coherence@k^{(l)}$$ (Eq. 6)
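To make the computation concrete, the following is a minimal NumPy sketch of $Coherence@k$; the array layout (a dense entity embedding matrix and a dense pairwise PMI matrix) and all variable names are our own assumptions rather than details fixed by the paper.

```python
import numpy as np

def coherence_at_k(theta_e, P, k=5):
    """Average Coherence@k over all embedding dimensions.

    theta_e: (n_entities, d) entity embedding matrix.
    P:       (n_entities, n_entities) symmetric PMI matrix.
    """
    _, d = theta_e.shape
    per_dim = []
    for l in range(d):
        top = np.argsort(theta_e[:, l])[-k:]      # top-k entities along dimension l
        pair_sum = sum(P[top[i], top[j]]          # sum PMI over all unordered pairs
                       for i in range(1, k) for j in range(i))
        per_dim.append(pair_sum)
    return float(np.mean(per_dim))
```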
We want to learn an embedding matrix $\theta _e$ which has high coherence (i.e. which maximizes $Coherence@k$ ). Since $\theta _e$ changes during training, the set of top $k$ entities along each dimension varies over iterations. Hence, directly maximizing $Coherence@k$ seems to be tricky.
An alternate approach could be to promote higher values for entity pairs having high PMI score $p_{ij}$ . This will result in an embedding matrix $\theta _e$ with a high value of $Coherence@k$ since high PMI entity pairs are more likely to be among top $k$ entities.
This idea can be captured by the following coherence term:
$$\mathcal {C}(\theta _e, P) = \sum _{i=2}^{n}\sum _{j=1}^{i-1} \left\Vert v(e_i)^\intercal v(e_j) - p_{ij} \right\Vert ^2$$ (Eq. 8)
where $P$ is the entity-pair PMI matrix and $v(e)$ denotes the vector for entity $e$ . This term can be used in the objective function defined in Equation 13 .
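A PyTorch sketch of this coherence term is shown below; treating $P$ as a dense constant matrix is our simplification, and in practice one would restrict the sum to entity pairs with known PMI scores.

```python
import torch

def coherence_term(theta_e, P):
    """C(theta_e, P): squared error between entity dot products and PMI scores.

    theta_e: (n, d) trainable entity embedding matrix.
    P:       (n, n) precomputed PMI matrix, treated as a constant.
    """
    gram = theta_e @ theta_e.t()                            # v(e_i)^T v(e_j) for all pairs
    diff = gram - P
    mask = torch.tril(torch.ones_like(diff), diagonal=-1)   # count each unordered pair once
    return (diff.pow(2) * mask).sum()
```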
Entity Model (Model-E)
We use the Entity Model proposed in BIBREF0 for learning KG embeddings. This model assumes a vector $v(e)$ for each entity and two vectors $v_s(r)$ and $v_o(r)$ for each relation of the KG. The score for the triple $(e_s, r, e_o)$ is given by,
$$f(e_s, r, e_o) = v(e_s)^\intercal v_s(r) + v(e_o)^\intercal v_o(r)$$ (Eq. 10)
Training these vectors requires incorrect triples. So, we use the closed world assumption. For each triple $t \in \mathcal {T}$ , we create two negative triples $t^-_o$ and $t^-_s$ by corrupting the object and subject of the triples respectively such that the corrupted triples don't appear in training, test or validation data. The loss for a triple pair is defined as $loss(t, t^-) = - \log (\sigma (f(t) - f(t^-)))$ . Then, the aggregate loss function is defined as
$$L(\theta _e, \theta _r, \mathcal {T}) = \frac{1}{|\mathcal {T}|}\sum _{t\in \mathcal {T}} \left(loss(t, t^-_o) + loss(t, t^-_s) \right)$$ (Eq. 11)
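The scoring function and the pairwise loss translate directly into code; the sketch below uses PyTorch tensors as embedding tables and assumes integer ids for entities and relations, which is our convention rather than the paper's.

```python
import torch
import torch.nn.functional as F

def model_e_score(v_e, v_s_r, v_o_r, s, r, o):
    """f(e_s, r, e_o) = v(e_s)^T v_s(r) + v(e_o)^T v_o(r).

    v_e:          (n_entities, d) entity vectors.
    v_s_r, v_o_r: (n_relations, d) subject- and object-side relation vectors.
    s, r, o:      1-D LongTensors of entity/relation ids for a batch of triples.
    """
    return (v_e[s] * v_s_r[r]).sum(-1) + (v_e[o] * v_o_r[r]).sum(-1)

def pair_loss(f_pos, f_neg):
    """loss(t, t^-) = -log sigma(f(t) - f(t^-))."""
    return -F.logsigmoid(f_pos - f_neg)

def aggregate_loss(f_pos, f_neg_obj, f_neg_subj):
    """Average of object- and subject-corruption losses over the batch."""
    return (pair_loss(f_pos, f_neg_obj) + pair_loss(f_pos, f_neg_subj)).mean()
```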
Objective
The overall loss function can be written as follows:
$$L(\theta _e, \theta _r, \mathcal {T}) + \lambda _c \mathcal {C}(\theta _e, P) + \lambda _r \mathcal {R}(\theta _e, \theta _r)$$ (Eq. 13)
Where $\mathcal {R}(\theta _e, \theta _r) = \frac{1}{2}\left(\left\Vert \theta _e\right\Vert ^2+\left\Vert \theta _r\right\Vert ^2\right)$ is the $L2$ regularization term and $\lambda _c$ and $\lambda _r$ are hyper-parameters controlling the trade-off among different terms in the objective function.
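Putting the pieces together, the overall objective can be sketched as below. It reuses the `coherence_term` and `aggregate_loss` sketches given earlier in this section, and the default hyper-parameter values simply mirror those reported later in the experimental setup.

```python
def overall_objective(f_pos, f_neg_obj, f_neg_subj, theta_e, theta_r, P,
                      lambda_c=0.01, lambda_r=0.01):
    """L(theta_e, theta_r, T) + lambda_c * C(theta_e, P) + lambda_r * R(theta_e, theta_r).

    theta_r: all relation parameters (e.g. v_s_r and v_o_r stacked together).
    """
    link_loss = aggregate_loss(f_pos, f_neg_obj, f_neg_subj)
    l2 = 0.5 * (theta_e.pow(2).sum() + theta_r.pow(2).sum())
    return link_loss + lambda_c * coherence_term(theta_e, P) + lambda_r * l2
```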
Datasets
We use the FB15k-237 BIBREF13 dataset for our experiments. It contains 14,541 entities and 237 relations. The triples are split into training, validation and test sets containing 272,115, 17,535 and 20,466 triples respectively. For extracting entity co-occurrences, we use the textual relations used in BIBREF1 . This resource contains around 3.7 million textual triples, which we use for calculating PMI for entity pairs.
Experimental Setup
We use the method proposed in BIBREF0 as the baseline. Please refer to Section "Entity Model (Model-E)" for more details. For evaluating the learned embeddings, we test them on different tasks. All the hyper-parameters are tuned using performance (MRR) on the validation data. We use 100 dimensions after cross-validating among 50, 100 and 200 dimensions. For regularization, we use $\lambda _r = 0.01$ (from $10,1,0.1,0.01$ ) and $\lambda _c = 0.01$ (from $10,1,0.1,0.01$ ) for $L2$ and coherence regularization respectively. We use multiple random initializations sampled from a Gaussian distribution. For optimization, we use gradient descent and stop optimization when the gradient becomes 0 up to 3 decimal places. The final performance measures are reported on the test data.
Results
In the following sections, we compare the performance of the proposed method with the baseline method on different tasks. Please refer to Table 1 for the results.
For evaluating interpretability, we use $Coherence@k$ (Equation 6 ) along with automated and manual word intrusion tests. In the word intrusion test BIBREF14 , the top $k(=5)$ entities along a dimension are mixed with the bottom-most entity (the intruder) in that dimension and shuffled. Multiple (3 in our case) human annotators are then asked to identify the intruder, and we use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task, and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate the following score for all $k+1$ entities
$$\text{AutoWI}(e_i) = \sum _{j=1, j\ne i}^{k+1}{p_{ij}}$$ (Eq. 18)
where $p_{ij}$ are the PMI scores. The entity having least score is identified as the intruder. We report the fraction of dimensions for which we were able to identify the intruder correctly.
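A small NumPy sketch of the automated word intrusion scoring is given below; representing the PMI scores as a dense matrix indexed by entity id is our assumption.

```python
import numpy as np

def predict_intruder(entity_ids, P):
    """Return the entity id predicted as the intruder among k+1 candidates.

    entity_ids: ids of the top-k entities of a dimension plus the intruder candidate.
    P:          (n, n) PMI matrix over all entities.
    """
    sub = P[np.ix_(entity_ids, entity_ids)]
    scores = sub.sum(axis=1) - np.diag(sub)    # AutoWI(e_i) = sum over j != i of p_ij
    return entity_ids[int(np.argmin(scores))]  # lowest score is flagged as the intruder
```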
As we can see in Table 1 , the proposed method achieves better values for $Coherence@5$ as a direct consequence of the regularization term, thereby maximizing coherence between appropriate entities. Performance on the word intrusion task also improves drastically as the intruder along each dimension is a lot easier to identify owing to the fact that the top entities for each dimension group together more conspicuously.
In this experiment, we test the model's ability to predict the best object entity for a given subject entity and relation. For each of the triples, we fix the subject and the relation and rank all entities (within the same category as the true object entity) based on their score according to Equation 10 . We report the Mean Rank (MR) and Mean Reciprocal Rank (MRR) of the true object entity, and Hits@10 (the percentage of cases in which the true object entity is ranked in the top 10).
Since the objective of the coherence regularization term is tangential to that of the original loss function, it is not expected to affect performance on the link prediction task. However, the results show only a small drop of $1.2$ in MRR, as the coherence term lends credibility to triples that are otherwise deemed incorrect under the closed world assumption.
We use the abbreviations BS (Bachelor of Science), MS (Master of Science), UK (United Kingdom) and USA (United States of America); these appear in their full forms in the data.
In this experiment, we test the model on classifying correct and incorrect triples. For finding incorrect triples, we corrupt the object entity with a randomly selected entity within the same category. For classification, we use validation data to find the best threshold for each relation by training an SVM classifier and later use this threshold for classifying test triples. We report the mean accuracy and mean AUC over all relations.
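As an illustration of the per-relation thresholding, the sketch below fits a linear SVM on the one-dimensional triple score and reads off its decision boundary as the threshold; the exact features and SVM configuration are not specified in the paper, so this is only our reading of the setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

def relation_thresholds(scores, labels, relations):
    """Fit one score threshold per relation on validation triples.

    scores:    model scores of validation triples (correct and corrupted).
    labels:    1 for correct triples, 0 for corrupted ones.
    relations: relation id of each triple.
    """
    thresholds = {}
    for rel in set(relations):
        idx = [i for i, r in enumerate(relations) if r == rel]
        X = np.array([[scores[i]] for i in idx])
        y = np.array([labels[i] for i in idx])
        if len(set(y)) < 2:                      # need both classes to fit a boundary
            continue
        clf = LinearSVC().fit(X, y)
        # decision boundary of w * x + b = 0  ->  x = -b / w
        thresholds[rel] = float(-clf.intercept_[0] / clf.coef_[0, 0])
    return thresholds
```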
We observe that the proposed method achieves slightly better performance for triple classification, improving the accuracy by $4.4$ points. The PMI information adds more evidence for the correct triples which are related in the text data, producing a better threshold that more accurately distinguishes correct from incorrect triples.
Qualitative Analysis of Results
Since our aim is to induce interpretability in representations, in this section, we evaluate the embeddings learned by the baseline as well as the proposed method. For both methods, we select some dimensions randomly and present top 5 entities along those dimensions. The results are presented in Table 2 .
As we can see from the results, the proposed method produces more coherent entities than the baseline method.
Conclusion and Future Works
In this work, we proposed a method for inducing interpretability in KG embeddings using a coherence regularization term. We evaluated the proposed and the baseline method on the interpretability of the learned embeddings. We also evaluated the methods on different KG tasks and compared their performance. We found that the proposed method achieves better interpretability while maintaining comparable performance on KG tasks. As next steps, we plan to evaluate the generalizability of the method with more recent KG embeddings. | Performance was comparable, with the proposed method quite close to, and sometimes exceeding, the performance of the baseline method. |
9ac923be6ada1ba2aa20ad62b0a3e593bb94e085 | 9ac923be6ada1ba2aa20ad62b0a3e593bb94e085_0 | Q: How do they evaluate interpretability?
Text: Introduction
Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, United States etc while the edges represent relationships between them.
These KGs have grown huge, but they are still not complete BIBREF1 . Hence the task of inferring new facts becomes important. Many vector space models have been proposed which can perform reasoning over KGs efficiently BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF0 , BIBREF1 etc. These methods learn representations for entities and relations as vectors in a vector space, capturing global information about the KG. The task of KG inference is then defined as operations over these vectors. Some of these methods like BIBREF0 , BIBREF1 are capable of exploiting additional text data apart from the KG, resulting in better representations.
Although these methods have shown good performance in applications, they don't address the problem of understanding semantics of individual dimensions of the KG embedding. A recent work BIBREF6 addressed the problem of learning semantic features for KGs. However, they don't directly use vector space modeling.
In this work, we focus on incorporating interpretability in KG embeddings. Specifically, we aim to learn interpretable embeddings for KG entities by incorporating additional entity co-occurrence statistics from text data. This work is motivated by BIBREF7 who presented automated methods for evaluating topics learned via topic modelling methods. We adapt these measures for the vector space model and propose a method to directly maximize them while learning KG embedding. To the best of our knowledge, this work presents the first regularization term which induces interpretability in KG embeddings.
Related Work
Several methods have been proposed for learning KG embeddings. They differ in their modeling of entities and relations, their use of text data, and the interpretability of the learned embeddings. We summarize some of these methods in the following sections.
Vector-space models for KG Embeddings
A very effective and powerful set of models are based on translation vectors. These models represent entities as vectors in $d$ -dimensional space, $\mathbb {R}^d$ and relations as translation vectors from head entity to tail entity, in either same or a projected space. TransE BIBREF2 is one of the initial works, which was later improved by many works [ BIBREF3 , BIBREF4 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ]. Also, there are methods which are able to incorporate text data while learning KG embeddings. BIBREF0 is one such method, which assumes a combined universal schema of relations from KG as well as text. BIBREF1 further improves the performance by sharing parameters among similar textual relations.
Interpretability of Embedding
While the vector space models perform well in many tasks, the semantics of learned representations are not directly clear. This problem for word embeddings was addressed by BIBREF12 where they proposed a set of constraints inducing interpretability. However, its adaptation for KG embeddings hasn't been addressed. A recent work BIBREF6 addressed a similar problem, where they learn coherent semantic features for entities and relations in KG. Our method differs from theirs in the following two aspects. Firstly, we use vector space modeling leading directly to KG embeddings while they need to infer KG embeddings from their probabilistic model. Second, we incorporate additional information about entities which helps in learning interpretable embeddings.
Proposed Method
We are interested in inducing interpretability in KG embeddings, and regularization is a natural way to achieve it. We therefore look for novel regularizers for KG embeddings and explore a measure of coherence proposed in BIBREF7 . This measure allows automated evaluation of the quality of topics learned by topic modeling methods by using additional Point-wise Mutual Information (PMI) for word pairs. It was also shown to have high correlation with human evaluation of topics.
Based on this measure of coherence, we propose a regularization term. This term can be used with existing KG embedding methods (e.g., BIBREF0 ) for inducing interpretability. It is described in the following sections.
Coherence
In topic models, the coherence of a topic can be determined by the semantic relatedness among the top entities within the topic. This idea can also be used in vector space models by treating the dimensions of the vector space as topics. With this assumption, we can use the measure of coherence defined in the following section for evaluating the interpretability of the embeddings.
$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.
Coherence for top $k$ entities along dimension $l$ is defined as follows:
$$Coherence@k^{(l)} = \sum _{i=2}^{k}\sum _{j=1}^{i-1}{p_{ij}}$$ (Eq. 5)
where $p_{ij}$ is PMI score between entities $e_i$ and $e_j$ extracted from text data. $Coherence@k$ for the entity embedding matrix $\theta _e$ is defined as the average over all dimensions.
$$Coherence@k = \frac{1}{d} \sum _{l=1}^{d} Coherence@k^{(l)}$$ (Eq. 6)
We want to learn an embedding matrix $\theta _e$ which has high coherence (i.e. which maximizes $Coherence@k$ ). Since $\theta _e$ changes during training, the set of top $k$ entities along each dimension varies over iterations. Hence, directly maximizing $Coherence@k$ seems to be tricky.
An alternate approach could be to promote higher values for entity pairs having high PMI score $p_{ij}$ . This will result in an embedding matrix $\theta _e$ with a high value of $Coherence@k$ since high PMI entity pairs are more likely to be among top $k$ entities.
This idea can be captured by the following coherence term:
$$\mathcal {C}(\theta _e, P) = \sum _{i=2}^{n}\sum _{j=1}^{i-1} \left\Vert v(e_i)^\intercal v(e_j) - p_{ij} \right\Vert ^2$$ (Eq. 8)
where $P$ is the entity-pair PMI matrix and $v(e)$ denotes the vector for entity $e$ . This term can be used in the objective function defined in Equation 13 .
Entity Model (Model-E)
We use the Entity Model proposed in BIBREF0 for learning KG embeddings. This model assumes a vector $v(e)$ for each entity and two vectors $v_s(r)$ and $v_o(r)$ for each relation of the KG. The score for the triple $(e_s, r, e_o)$ is given by,
$$f(e_s, r, e_o) = v(e_s)^\intercal v_s(r) + v(e_o)^\intercal v_o(r)$$ (Eq. 10)
Training these vectors requires incorrect triples. So, we use the closed world assumption. For each triple $t \in \mathcal {T}$ , we create two negative triples $t^-_o$ and $t^-_s$ by corrupting the object and subject of the triples respectively such that the corrupted triples don't appear in training, test or validation data. The loss for a triple pair is defined as $loss(t, t^-) = - \log (\sigma (f(t) - f(t^-)))$ . Then, the aggregate loss function is defined as
$$L(\theta _e, \theta _r, \mathcal {T}) = \frac{1}{|\mathcal {T}|}\sum _{t\in \mathcal {T}} \left(loss(t, t^-_o) + loss(t, t^-_s) \right)$$ (Eq. 11)
Objective
The overall loss function can be written as follows:
$$L(\theta _e, \theta _r, \mathcal {T}) + \lambda _c \mathcal {C}(\theta _e, P) + \lambda _r \mathcal {R}(\theta _e, \theta _r)$$ (Eq. 13)
Where $\mathcal {R}(\theta _e, \theta _r) = \frac{1}{2}\left(\left\Vert \theta _e\right\Vert ^2+\left\Vert \theta _r\right\Vert ^2\right)$ is the $L2$ regularization term and $\lambda _c$ and $\lambda _r$ are hyper-parameters controlling the trade-off among different terms in the objective function.
Datasets
We use the FB15k-237 BIBREF13 dataset for our experiments. It contains 14,541 entities and 237 relations. The triples are split into training, validation and test sets containing 272,115, 17,535 and 20,466 triples respectively. For extracting entity co-occurrences, we use the textual relations used in BIBREF1 . This resource contains around 3.7 million textual triples, which we use for calculating PMI for entity pairs.
Experimental Setup
We use the method proposed in BIBREF0 as the baseline. Please refer to Section "Entity Model (Model-E)" for more details. For evaluating the learned embeddings, we test them on different tasks. All the hyper-parameters are tuned using performance (MRR) on the validation data. We use 100 dimensions after cross-validating among 50, 100 and 200 dimensions. For regularization, we use $\lambda _r = 0.01$ (from $10,1,0.1,0.01$ ) and $\lambda _c = 0.01$ (from $10,1,0.1,0.01$ ) for $L2$ and coherence regularization respectively. We use multiple random initializations sampled from a Gaussian distribution. For optimization, we use gradient descent and stop optimization when the gradient becomes 0 up to 3 decimal places. The final performance measures are reported on the test data.
Results
In the following sections, we compare the performance of the proposed method with the baseline method on different tasks. Please refer to Table 1 for the results.
For evaluating interpretability, we use $Coherence@k$ (Equation 6 ) along with automated and manual word intrusion tests. In the word intrusion test BIBREF14 , the top $k(=5)$ entities along a dimension are mixed with the bottom-most entity (the intruder) in that dimension and shuffled. Multiple (3 in our case) human annotators are then asked to identify the intruder, and we use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task, and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate the following score for all $k+1$ entities
$$\text{AutoWI}(e_i) = \sum _{j=1, j\ne i}^{k+1}{p_{ij}}$$ (Eq. 18)
where $p_{ij}$ are the PMI scores. The entity having least score is identified as the intruder. We report the fraction of dimensions for which we were able to identify the intruder correctly.
As we can see in Table 1 , the proposed method achieves better values for $Coherence@5$ as a direct consequence of the regularization term, thereby maximizing coherence between appropriate entities. Performance on the word intrusion task also improves drastically as the intruder along each dimension is a lot easier to identify owing to the fact that the top entities for each dimension group together more conspicuously.
In this experiment, we test the model's ability to predict the best object entity for a given subject entity and relation. For each of the triples, we fix the subject and the relation and rank all entities (within the same category as the true object entity) based on their score according to Equation 10 . We report the Mean Rank (MR) and Mean Reciprocal Rank (MRR) of the true object entity, and Hits@10 (the percentage of cases in which the true object entity is ranked in the top 10).
Since the objective of the coherence regularization term is tangential to that of the original loss function, it is not expected to affect performance on the link prediction task. However, the results show only a small drop of $1.2$ in MRR, as the coherence term lends credibility to triples that are otherwise deemed incorrect under the closed world assumption.
We use the abbreviations BS (Bachelor of Science), MS (Master of Science), UK (United Kingdom) and USA (United States of America); these appear in their full forms in the data.
In this experiment, we test the model on classifying correct and incorrect triples. For finding incorrect triples, we corrupt the object entity with a randomly selected entity within the same category. For classification, we use validation data to find the best threshold for each relation by training an SVM classifier and later use this threshold for classifying test triples. We report the mean accuracy and mean AUC over all relations.
We observe that the proposed method achieves slightly better performance for triple classification, improving the accuracy by $4.4$ points. The PMI information adds more evidence for the correct triples which are related in the text data, producing a better threshold that more accurately distinguishes correct from incorrect triples.
Qualitative Analysis of Results
Since our aim is to induce interpretability in representations, in this section, we evaluate the embeddings learned by the baseline as well as the proposed method. For both methods, we select some dimensions randomly and present top 5 entities along those dimensions. The results are presented in Table 2 .
As we can see from the results, the proposed method produces more coherent entities than the baseline method.
Conclusion and Future Works
In this work, we proposed a method for inducing interpretability in KG embeddings using a coherence regularization term. We evaluated the proposed and the baseline method on the interpretability of the learned embeddings. We also evaluated the methods on different KG tasks and compared their performance. We found that the proposed method achieves better interpretability while maintaining comparable performance on KG tasks. As next steps, we plan to evaluate the generalizability of the method with more recent KG embeddings. | For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests. |
3b995a7358cefb271b986e8fc6efe807f25d60dc | 3b995a7358cefb271b986e8fc6efe807f25d60dc_0 | Q: What types of word representations are they evaluating?
Text: Introduction
Commonsense reasoning is fundamental for natural language agents to generalize inference beyond their training corpora. Although the natural language inference (NLI) task BIBREF0 , BIBREF1 has proved a good pre-training objective for sentence representations BIBREF2 , commonsense coverage is limited and most models are still end-to-end, relying heavily on word representations to provide background world knowledge.
Therefore, we propose modeling commonsense knowledge down to word-level analogical reasoning. In this sense, existing analogy benchmarks are lackluster. For Chinese analogy (CA), the simplified Chinese dataset CA8 BIBREF3 and the traditional Chinese dataset CA-Google BIBREF4 translated from the English BIBREF5 contain only a few dozen relations, most of which are either morphological, e.g., a shared prefix, or about named entities, e.g., capital-country.
However, commonsense knowledge bases such as WordNet BIBREF6 and ConceptNet BIBREF7 have long annotated relations in our lexicon. Among them, E-HowNet BIBREF4 , extended from HowNet BIBREF8 , currently annotates 88K traditional Chinese words with their structured definitions and English translations.
In this paper, we propose an algorithm for the extraction of accurate commonsense analogies from E-HowNet. We present CA-EHN, the first commonsense analogy dataset containing 85,226 analogies covering 5,563 words and 6,490 commonsense relations.
E-HowNet
E-HowNet 2.0 consists of two major parts: A lexicon of words and concepts with multi-layered annotations, and a taxonomy of concepts with attached word senses.
Lexicon
The E-HowNet lexicon consists of two types of tokens: 88K words and 4K concepts. Words and concepts are distinguished by whether there is a vertical bar and an English string in the token. For example, 人 (person) and 雞 (chicken) are words, and human $\vert $ 人 and 雞 $\vert $ chicken are concepts. In this work, the order of English and Chinese within a concept does not matter. In addition, E-HowNet also contains dozens of relations, which come fully in English, e.g., or, theme, telic.
Words and concepts in E-HowNet are annotated with one or more structured definitions consisting of concepts and relations. Table 1 provides several examples with gradually increasing complexity: 人 (person) is defined simply as a human $\vert $ 人; 駿馬 $\vert $ ExcellentSteed is defined as a 馬 $\vert $ horse which has a qualification relation with HighQuality $\vert $ 優質; 實驗室 (laboratory) is defined as an InstitutePlace $\vert $ 場所 used for conducting experiments or research. Each concept has only one definition, but a word may have multiple senses and hence multiple definitions. In this work, we use E-HowNet word sense definitions to extract commonsense analogies (Section "Commonsense Analogy" ). In addition, word senses are annotated with their English translations, which could be used to transfer our extracted analogies to English multi-word expressions (MWE).
Taxonomy
Concepts in E-HowNet are additionally organized into a taxonomy. Figure 1 shows the partially expanded tree. Each word sense in the E-HowNet lexicon is attached to a taxon in the tree. In this work, we show that infusing E-HowNet taxonomy tree into word embeddings boosts performance across benchmarks (Section "Commonsense Infusing" ).
Commonsense Analogy
We extract commonsense word analogies with a rich coverage of words and relations by comparing word sense definitions. The extraction algorithm is further refined with multiple filters, including putting the linguist in the loop.
Analogical Word Pairs
Illustrated in Figure 2 , the extraction algorithm before refinement consists of five steps.
Definition concept expansion. As many words are synonymous with some concepts, many word senses are defined trivially by one concept. For example, the definition of 駿馬 (excellent steed) is simply {駿馬 $\vert $ ExcellentSteed}. The triviality is resolved by expanding such definitions by one layer, e.g., replacing {駿馬 $\vert $ ExcellentSteed} with {馬 $\vert $ horse:qualification={HighQuality $\vert $ 優質}}, i.e., the definition of 駿馬 $\vert $ ExcellentSteed.
Definition string parsing. We parse each definition into a directed graph. Each node in the graph is either a word, a concept, or a function relation, e.g., or() at the bottom of Table 1 . Each edge is either an attribute relation edge, e.g., :telic=, or a dummy argument edge connecting a function node with one of its argument nodes.
Definition graph comparison. For every sense pair of two different words in the E-HowNet lexicon, we determine whether their definition graphs differ in only one concept node. If they do, the two (word, concept) pairs are analogical to one another. For example, since the graph of 良材 sense#2 (the good timber) and the expanded graph of 駿馬 sense#1 (an excellent steed) differ only in wood $\vert $ 木 and 馬 $\vert $ horse, we extract the following concept analogy: 良材:wood $\vert $ 木=駿馬:馬 $\vert $ horse.
Left concept expansion. For each concept analogy, we expand the left concept to those words that have one sense defined trivially by it. For example, there is only one word 木頭 (wood) defined as {wood $\vert $ 木}. Thus after expansion, there is still only one analogy: 良材:木頭=駿馬:馬 $\vert $ horse. Most of the time, this step yields multiple analogies per concept analogy.
Right concept expansion. Finally, the remaining concept in each analogy is again expanded to the list of words with a sense trivially defined by it. However, this time we do not use them to form multiple analogies. Instead, the word list is kept as a synset. For example, as 山馬 (orohippus), 馬 (horse), 馬匹 (horses), 駙 (side horse) all have one sense defined as {馬 $\vert $ horse}, the final analogy becomes 良材:木頭=駿馬:{山馬,馬,馬匹,駙}. When evaluating embeddings on our benchmark, we consider a prediction correct as long as the predicted word belongs to the synset.
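Returning to the graph-comparison step, the sketch below checks whether two definition graphs differ in exactly one concept node; representing a graph as a node set plus a set of (head, relation, tail) edges is our own simplification of the parsed structures.

```python
def one_concept_difference(nodes1, edges1, nodes2, edges2):
    """Return the differing concept pair (c1, c2) if the two definition graphs
    differ in exactly one node, otherwise None.

    nodes*: sets of node labels; edges*: sets of (head, relation, tail) triples.
    """
    only1, only2 = nodes1 - nodes2, nodes2 - nodes1
    if len(only1) != 1 or len(only2) != 1:
        return None
    c1, c2 = next(iter(only1)), next(iter(only2))
    rename = lambda x: c1 if x == c2 else x
    # relabel the differing node in the second graph and compare edge sets
    edges2_renamed = {(rename(h), r, rename(t)) for h, r, t in edges2}
    return (c1, c2) if edges1 == edges2_renamed else None
```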
Accurate Analogy
As the core procedure yields an excessively large benchmark, added to the fact that E-HowNet word sense definitions are sometimes inaccurate, we made several refinements to the extraction process.
Concrete concepts. As we found that E-HowNet tends to provide more accurate definitions for more concrete concepts, we require words and concepts at every step of the process to be under physical $\vert $ 物質, which is one layer below thing $\vert $ 萬物 in Figure 1 . This restriction shrinks the benchmark by half.
Common words. At every step of the process, we require words to occur at least five times in ASBC 4.0 BIBREF9 , a segmented traditional Chinese corpus containing 10M words from articles between 1981 and 2007. This eliminates uncommon, ancient words or words with synonymous but uncommon, ancient characters. As shown in Table 3 , the benchmark size is significantly reduced by this restriction.
Linguist checking. We added two data checks into the extraction process between definition graph comparison and left concept expansion. As shown in Table 3 , each of the 36,100 concept analogies was checked by a linguist, leaving 24,439 accurate ones. Furthermore, each synset needed in the 24,439 concept analogies was checked again to remove words that are not actually synonymous with the defining concept. For example, 花草, 山茶花, 薰衣草, 鳶尾花 are all common words with a sense defined trivially as {FlowerGrass $\vert $ 花草}. However, the last three (camellia, lavender, iris) are not actually synonyms but hyponyms of the concept. This step also helps eliminate words in a synset that are used only in their rare senses, as we do not expect embeddings to encode those senses without word sense disambiguation (WSD). After the second-pass linguist check, we arrived at 85,226 accurate analogies.
Analogy Datasets
Table 3 compares Chinese word analogy datasets. Most analogies in existing datasets are morphological (morph.) or named entity (entity) relations. For example, CA8-Morphological BIBREF3 uses 21 shared prefix characters, e.g., 第, to form 2,553 analogies, e.g., 一 : 第一 = 二 : 第二 (one : first = two : second). As for named entities, some 20 word pairs of the capital-country relation can be permuted to form 190 analogies, which require a knowledge base but not commonsense to solve. Only the nature part of CA8 and the man-woman part of CA-Google BIBREF10 contain a handful of relations that require commonsense world knowledge. In contrast, CA-EHN extracts 85K linguist-checked analogies covering 6,490 concept pairs, e.g., (wood $\vert $ 木, 馬 $\vert $ horse). Table 2 shows a small sample of the data, covering such diverse domains as onomatopoeia, disability, kinship, and zoology. The full CA-EHN is available in the supplementary materials.
Word Embeddings
We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional Chinese part of Chinese Gigaword BIBREF13 and ASBC 4.0 BIBREF9 . The large corpus additionally includes the Chinese part of Wikipedia.
Table 4 shows embedding performance across analogy benchmarks. Cov denotes the number of analogies of which the first three words exist in the embedding. Analogies that are not covered are excluded from that evaluation. Still, we observe that the larger corpus yields higher accuracy across all benchmarks. In addition, using SGNS instead of GloVe provides universal boosts in performance.
While performance on CA-EHN correlates well to that on other benchmarks, commonsense analogies prove to be much more difficult than morphological or named entity analogies for distributed word representations.
Commonsense Infusing
E-HowNet comes in two major parts: a lexicon and a taxonomy. We have used the lexicon to extract the CA-EHN commonsense analogies. For the taxonomy, we experiment with infusing its hypo-hyper and same-taxon relations into distributed word representations by retrofitting BIBREF14 . For example, in Figure 1 , the word vector of 空間 is optimized to be close to both its distributed representation and the word vectors of 空隙 (same-taxon) and 事物 (hypo-hyper). Table 4 shows that retrofitting embeddings with the E-HowNet taxonomy improves performance on most benchmarks, and all three embeddings have doubled accuracies on CA-EHN. This shows that CA-EHN is a great indicator of how well word representations embed commonsense knowledge.
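The sketch below shows one common closed-form retrofitting update applied to the E-HowNet neighbour lists; the weighting scheme and number of iterations are our assumptions, and the cited retrofitting method may differ in its details.

```python
import numpy as np

def retrofit(X, neighbours, alpha=1.0, beta=1.0, iterations=10):
    """Pull word vectors toward their taxonomy neighbours while staying close
    to the original distributed vectors.

    X:          (n, d) distributed word vectors (kept fixed as anchors).
    neighbours: dict mapping word index -> list of indices linked by
                hypo-hyper or same-taxon relations.
    """
    Y = X.copy()
    for _ in range(iterations):
        for i, nbrs in neighbours.items():
            if not nbrs:
                continue
            # weighted average of the anchor vector and current neighbour vectors
            Y[i] = (alpha * X[i] + beta * Y[nbrs].sum(axis=0)) / (alpha + beta * len(nbrs))
    return Y
```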
Conclusion
We have presented CA-EHN, the first commonsense word analogy dataset, by leveraging word sense definitions in E-HowNet. After linguist checking, we have 85,226 Chinese analogies covering 5,563 words and 6,490 commonsense relations. We anticipate that CA-EHN will become an important benchmark testing how well future embedding methods capture commonsense knowledge, which is crucial for models to generalize inference beyond their training corpora. With translations provided by E-HowNet, Chinese words in CA-EHN can be transferred to English MWEs. | GloVe; SGNS |
2210178facc0e7b3b6341eec665f3c098abef5ac | 2210178facc0e7b3b6341eec665f3c098abef5ac_0 | Q: What type of recurrent layers does the model use?
Text: Introduction
Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes, e.g. smalltalk BIBREF2 , BIBREF3 , or for a specific task such as finding restaurants or booking flights BIBREF4 , BIBREF5 . Here, we focus on task-oriented dialog systems, which assist the users to reach a certain goal.
Task-oriented dialog systems are often implemented in a modular architecture to break up the complex task of conducting dialogs into more manageable subtasks. BIBREF6 describe the following prototypical set-up of such a modular architecture: First, an ASR system converts the spoken user utterance into text. Then, a spoken language understanding (SLU) module extracts the user's intent and coarse-grained semantic information. Next, a dialog state tracking (DST) component maintains a distribution over the state of the dialog, updating it in every turn. Given this information, the dialog policy manager decides on the next action of the system. Finally, a natural language generation (NLG) module forms the system reply that is converted into an audio signal via a text-to-speech synthesizer.
Error propagation poses a major problem in modular architectures as later components depend on the output of the previous steps. We show in this paper that DST suffers from ASR errors, which was also noted by BIBREF7 . One solution is to avoid modularity and instead perform joint reasoning over several subtasks, e.g. many DST systems directly operate on ASR output and do not rely on a separate SLU module BIBREF8 , BIBREF7 , BIBREF9 . End-to-end systems that can be directly trained on dialogs without intermediate annotations have been proposed for open-domain dialog systems BIBREF3 . This is more difficult to realize for task-oriented systems as they often require domain knowledge and external databases. First steps into this direction were taken by BIBREF5 and BIBREF10 , yet these approaches do not integrate ASR into the joint reasoning process.
We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets).
We adapt recently proposed algorithms for encoding lattices with recurrent neural networks (RNNs) BIBREF14 , BIBREF15 to encode cnets via an RNN based on Gated Recurrent Units (GRUs), perform DST with a neural encoder-classifier system, and show that this outperforms encoding only the best ASR hypothesis. We are aware of two DST approaches that incorporate posterior word-probabilities from cnets in addition to features derived from the n-best lists BIBREF11 , BIBREF16 , but to the best of our knowledge, we develop the first DST system directly operating on cnets.
Proposed Model
Our model depicted in Figure FIGREF3 is based on an incremental DST system BIBREF12 . It consists of an embedding layer for the words in the system and user utterances, followed by a fully connected layer composed of Rectified Linear Units (ReLUs) BIBREF17 , which yields the input to a recurrent layer to encode the system and user outputs in each turn with a softmax classifier on top. INLINEFORM0 denotes a weighted sum INLINEFORM1 of the system dialog act INLINEFORM2 and the user utterance INLINEFORM3 , where INLINEFORM4 , and INLINEFORM5 are learned parameters: DISPLAYFORM0
Independent experiments with the 1-best ASR output showed that a weighted sum of the system and user vector outperformed taking only the user vector INLINEFORM0 as in the original model of BIBREF12 . We chose this architecture over other successful DST approaches that operate on the turn-level of the dialogs BIBREF8 , BIBREF7 because it processes the system and user utterances word-by-word, which makes it easy to replace the recurrent layer of the original version with the cnet encoder.
Our cnet encoder is inspired from two recently proposed algorithms to encode lattices with an RNN with standard memory BIBREF14 and a GRU-based RNN BIBREF15 . In contrast to lattices, every cnet state has only one predecessor and groups together the alternative word hypotheses of a fixed time interval (timestep). Therefore, our cnet encoder is conceptually simpler and easier to implement than the lattice encoders: The recurrent memory only needs to retain the hidden state of the previous timestep, while in the lattice encoder the hidden states of all previously processed lattice states must be kept in memory throughout the encoding process. Following BIBREF15 , we use GRUs as they provide an extended memory compared to plain RNNs. The cnet encoder reads in one timestep at a time as depicted in Figure FIGREF4 . The key idea is to separately process each of the INLINEFORM0 word hypotheses representations INLINEFORM1 in a timestep with the standard GRU to obtain INLINEFORM2 hidden states INLINEFORM3 as defined in Equation ( EQREF7 )-() where INLINEFORM5 , and INLINEFORM6 are the learned parameters of the GRU update, candidate activation and reset gate. To get the hidden state INLINEFORM7 of the timestep, the hypothesis-specific hidden states INLINEFORM8 are combined by a pooling function (Equation ). DISPLAYFORM0
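A compact PyTorch sketch of the cnet encoder is given below; the tensor layout (one list entry per timestep, with hypothesis embeddings stacked in a matrix and their posterior scores in a vector) and the use of `nn.GRUCell` are our implementation choices rather than details fixed by the paper. Both pooling variants discussed next are included.

```python
import torch
import torch.nn as nn

class CnetGRU(nn.Module):
    """Encode a confusion network one timestep at a time with a shared GRU cell."""

    def __init__(self, emb_dim, hidden_dim, pooling="weighted"):
        super().__init__()
        self.cell = nn.GRUCell(emb_dim, hidden_dim)
        self.hidden_dim = hidden_dim
        self.pooling = pooling

    def forward(self, timesteps, scores):
        # timesteps: list of (n_hyp, emb_dim) hypothesis embeddings per timestep
        # scores:    list of (n_hyp,) posterior scores per timestep
        h = torch.zeros(1, self.hidden_dim)
        for hyps, w in zip(timesteps, scores):
            n_hyp = hyps.size(0)
            # process every hypothesis separately, starting from the same previous state
            h_hyps = self.cell(hyps, h.repeat(n_hyp, 1))
            if self.pooling == "weighted":
                h = (w.unsqueeze(1) * h_hyps).sum(dim=0, keepdim=True)
            else:  # average pooling
                h = h_hyps.mean(dim=0, keepdim=True)
        return h  # final hidden state of the utterance
```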
We experiment with the two different pooling functions INLINEFORM0 for the INLINEFORM1 hidden GRU states INLINEFORM2 of the alternative word hypotheses that were used by BIBREF14 : average pooling, and weighted pooling in which each hypothesis is weighted by its score.
Instead of the system output in sentence form we use the dialog act representations in the form of INLINEFORM0 dialog-act, slot, value INLINEFORM1 triples, e.g. `inform food Thai', which contain the same information in a more compact way. Following BIBREF7 , we initialize the word embeddings with 300-dimensional semantically specialized PARAGRAM-SL999 embeddings BIBREF21 . The hyper-parameters for our model are listed in the appendix.
The cnet GRU subsumes a standard GRU-based RNN if each token in the input is represented as a timestep with a single hypothesis. We adopt this method for the system dialog acts and the baseline model that encode only the best ASR hypothesis.
Data
In our experiments, we use the dataset provided for the second Dialog State Tracking Challenge (DSTC2) BIBREF22 that consists of user interactions with an SDS in the restaurant domain. It encompasses 1612, 506, 1117 dialogs for training, development and testing, respectively. Every dialog turn is annotated with its dialog state encompassing the three goals for area (7 values), food (93 values) and price range (5 values) and 8 requestable slots, e.g. phone and address. We train on the manual transcripts and the cnets provided with the dataset and evaluate on the cnets.
Some system dialog acts in the DSTC2 dataset do not correspond to words and thus were not included in the pretrained word embeddings. Therefore, we manually constructed a mapping of dialog acts to words contained in the embeddings, where necessary, e.g. we mapped expl-conf to explicit confirm.
In order to estimate the potential of improving DST by cnets, we investigated the coverage of words from the manual transcripts for different ASR output types. As shown in Table TABREF10 , cnets improve the coverage of words from the transcripts by more than 15 percentage points over the best hypothesis and more than five percentage points over the 10-best hypotheses.
However, the cnets provided with the DSTC2 dataset are quite large. The average cnet consists of 23 timesteps with 5.5 hypotheses each, amounting to about 125 tokens, while the average best hypothesis contains four tokens. Manual inspection of the cnets revealed that they contain a lot of noise such as interjections (uh, oh, ...) that never appear in the 10-best lists. The appendix provides an exemplary cnet for illustration. To reduce the processing time and amount of noisy hypotheses, we remove all interjections and additionally experiment with pruning hypotheses with a score below a certain threshold. As shown in Table TABREF10 , this does not discard too many correct hypotheses but markedly reduces the size of the cnet to an average of seven timesteps with two hypotheses.
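The pruning itself can be sketched as a simple filter over the timesteps; the interjection list below is illustrative only, not the exact set removed in the paper.

```python
INTERJECTIONS = {"uh", "oh", "um", "ah"}   # illustrative, not the paper's exact list

def prune_cnet(cnet, threshold=0.001):
    """Remove interjections and hypotheses scoring below `threshold`.

    cnet: list of timesteps, each a list of (word, score) pairs.
    Timesteps left empty after filtering are dropped entirely.
    """
    pruned = []
    for timestep in cnet:
        kept = [(word, score) for word, score in timestep
                if word not in INTERJECTIONS and score >= threshold]
        if kept:
            pruned.append(kept)
    return pruned
```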
Results and Discussion
We report the joint goals and requests accuracy (all goals or requests are correct in a turn) according to the DSTC2 featured metric BIBREF22 . We train each configuration 10 times with different random seeds and report the average, minimum and maximum accuracy. To study the impact of ASR errors on DST, we trained and evaluated our model on the different user utterance representations provided in the DSTC2 dataset. Our baseline model uses the best hypothesis of the batch ASR system, which has a word error rate (WER) of 34% on the DSTC2 test set. Most DST approaches use the hypotheses of the live ASR system, which has a lower WER of 29%. We train our baseline on the batch ASR outputs as the cnets were also produced by this system. As can be seen from Table TABREF11 , the DST accuracy slightly increases for the higher-quality live ASR outputs. More importantly, the DST performance drastically increases, when we evaluate on the manual transcripts that reflect the true user utterances nearly perfectly.
Results of the Model with Cnet Encoder
Table TABREF13 displays the results for our model evaluated on cnets for increasingly aggressive pruning levels (discarding only interjections, additionally discarding hypotheses with scores below 0.001 and 0.01, respectively). As can be seen, using the full cnet except for interjections does not improve over the baseline. We believe that the share of noisy hypotheses in the DSTC2 cnets is too high for our model to be able to concentrate on the correct hypotheses. However, when pruning low-probability hypotheses both pooling strategies improve over the baseline. Yet, average pooling performs worse for the lower pruning threshold, which shows that the model is still affected by noise among the hypotheses. Conversely, the model can exploit a rich but noisy hypothesis space by weighting the information retained from each hypothesis: Weighted pooling performs better for the lower pruning threshold of 0.001 with which we obtain the highest result overall, improving the joint goals accuracy by 1.6 percentage points compared to the baseline. Therefore, we conclude that is beneficial to use information from all alternatives and not just the highest scoring one but that it is necessary to incorporate the scores of the hypotheses and to prune low-probability hypotheses. Moreover, we see that an ensemble model that averages the predictions of ten cnet models trained with different random seeds also outperforms an ensemble of ten baseline models.
Although it would be interesting to compare the performance of cnets to full lattices, this is not possible with the original DSTC2 data as there were no lattices provided. This could be investigated in further experiments by running a new ASR system on the DSTC2 dataset to obtain both lattices and cnets. However, these results will not be comparable to previous results on this dataset due to the different ASR output.
Comparison to the State of the Art
The current state of the art on the DSTC2 dataset in terms of joint goals accuracy is an ensemble of neural models based on hand-crafted update rules and RNNs BIBREF16 . Besides, this model uses a delexicalization mechanism that replaces mentions of slots or values from the DSTC2 ontology by a placeholder to learn value-independent patterns BIBREF8 , BIBREF23 . While this approach is suitable for small domains and languages with a simple morphology such as English, it becomes increasingly difficult to locate words or phrases corresponding to slots or values in wider domains or languages with a rich morphology BIBREF7 and we therefore abstained from delexicalization.
The best result for the joint requests was obtained by a ranking model based on hand-crafted features, which relies on separate SLU systems besides ASR BIBREF11 . SLU is often cast as sequence labeling problem, where each word in the utterance is annotated with its role in respect to the user's intent BIBREF24 , BIBREF25 , requiring training data with fine-grained word-level annotations in contrast to the turn-level dialog state annotations. Furthermore, a separate SLU component introduces an additional set of parameters to the SDS that has to be learned. Therefore, it has been argued to jointly perform SLU and DST in a single system BIBREF8 , which we follow in this work.
As a more comparable reference for our set-up, we provide the result of the neural DST system of BIBREF7 that like our approach does not use outputs of a separate SLU system nor delexicalized features. Our ensemble models outperform BIBREF7 for the joint requests but are a bit worse for the joint goals. We stress that our goal was not to reach for the state of the art but show that DST can benefit from encoding cnets.
Conclusion
As we show in this paper, ASR errors pose a major obstacle to accurate DST in SDSs. To reduce the error propagation, we suggest to exploit the rich ASR hypothesis space encoded in cnets that contain more correct hypotheses than conventionally used n-best lists. We develop a novel method to encode cnets via a GRU-based RNN and demonstrate that this leads to improved DST performance compared to encoding the best ASR hypothesis on the DSTC2 dataset.
In future experiments, we would like to explore further ways to leverage the scores of the hypotheses, for example by incorporating them as an independent feature rather than a direct weight in the model.
Acknowledgments
We thank our anonymous reviewers for their helpful feedback. Our work has been supported by the German Research Foundation (DFG) via a research grant to the project A8 within the Collaborative Research Center (SFB) 732 at the University of Stuttgart.
A. Hyper-Parameters | GRU |
7cf726db952c12b1534cd6c29d8e7dfa78215f9e | 7cf726db952c12b1534cd6c29d8e7dfa78215f9e_0 | Q: What is a word confusion network?
Text: Introduction
Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes, e.g. smalltalk BIBREF2 , BIBREF3 , or for a specific task such as finding restaurants or booking flights BIBREF4 , BIBREF5 . Here, we focus on task-oriented dialog systems, which assist the users to reach a certain goal.
Task-oriented dialog systems are often implemented in a modular architecture to break up the complex task of conducting dialogs into more manageable subtasks. BIBREF6 describe the following prototypical set-up of such a modular architecture: First, an ASR system converts the spoken user utterance into text. Then, a spoken language understanding (SLU) module extracts the user's intent and coarse-grained semantic information. Next, a dialog state tracking (DST) component maintains a distribution over the state of the dialog, updating it in every turn. Given this information, the dialog policy manager decides on the next action of the system. Finally, a natural language generation (NLG) module forms the system reply that is converted into an audio signal via a text-to-speech synthesizer.
Error propagation poses a major problem in modular architectures as later components depend on the output of the previous steps. We show in this paper that DST suffers from ASR errors, which was also noted by BIBREF7 . One solution is to avoid modularity and instead perform joint reasoning over several subtasks, e.g. many DST systems directly operate on ASR output and do not rely on a separate SLU module BIBREF8 , BIBREF7 , BIBREF9 . End-to-end systems that can be directly trained on dialogs without intermediate annotations have been proposed for open-domain dialog systems BIBREF3 . This is more difficult to realize for task-oriented systems as they often require domain knowledge and external databases. First steps into this direction were taken by BIBREF5 and BIBREF10 , yet these approaches do not integrate ASR into the joint reasoning process.
We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets).
We adapt recently proposed algorithms for encoding lattices with recurrent neural networks (RNNs) BIBREF14 , BIBREF15 to encode cnets via an RNN based on Gated Recurrent Units (GRUs), perform DST with a neural encoder-classifier system, and show that this outperforms encoding only the best ASR hypothesis. We are aware of two DST approaches that incorporate posterior word-probabilities from cnets in addition to features derived from the n-best lists BIBREF11 , BIBREF16 , but to the best of our knowledge, we develop the first DST system directly operating on cnets.
Proposed Model
Our model depicted in Figure FIGREF3 is based on an incremental DST system BIBREF12 . It consists of an embedding layer for the words in the system and user utterances, followed by a fully connected layer composed of Rectified Linear Units (ReLUs) BIBREF17 , which yields the input to a recurrent layer to encode the system and user outputs in each turn with a softmax classifier on top. INLINEFORM0 denotes a weighted sum INLINEFORM1 of the system dialog act INLINEFORM2 and the user utterance INLINEFORM3 , where INLINEFORM4 , and INLINEFORM5 are learned parameters: DISPLAYFORM0
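The INLINEFORM/DISPLAYFORM placeholders above stand in for symbols lost during extraction. As a hedged sketch of what such a combination could look like, one plausible parameterization is given below; the symbol names and the element-wise weighting are assumptions, not taken from the original paper.

```latex
% Hedged reconstruction; m_t, \alpha_{\mathrm{sys}}, \alpha_{\mathrm{usr}} are assumed names.
% s_t is the encoded system dialog act, u_t the encoded user utterance,
% and the two weight vectors are learned parameters.
\[
  m_t \;=\; \alpha_{\mathrm{sys}} \odot s_t \;+\; \alpha_{\mathrm{usr}} \odot u_t
\]
```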
Independent experiments with the 1-best ASR output showed that a weighted sum of the system and user vector outperformed taking only the user vector INLINEFORM0 as in the original model of BIBREF12 . We chose this architecture over other successful DST approaches that operate on the turn-level of the dialogs BIBREF8 , BIBREF7 because it processes the system and user utterances word-by-word, which makes it easy to replace the recurrent layer of the original version with the cnet encoder.
Our cnet encoder is inspired from two recently proposed algorithms to encode lattices with an RNN with standard memory BIBREF14 and a GRU-based RNN BIBREF15 . In contrast to lattices, every cnet state has only one predecessor and groups together the alternative word hypotheses of a fixed time interval (timestep). Therefore, our cnet encoder is conceptually simpler and easier to implement than the lattice encoders: The recurrent memory only needs to retain the hidden state of the previous timestep, while in the lattice encoder the hidden states of all previously processed lattice states must be kept in memory throughout the encoding process. Following BIBREF15 , we use GRUs as they provide an extended memory compared to plain RNNs. The cnet encoder reads in one timestep at a time as depicted in Figure FIGREF4 . The key idea is to separately process each of the INLINEFORM0 word hypotheses representations INLINEFORM1 in a timestep with the standard GRU to obtain INLINEFORM2 hidden states INLINEFORM3 as defined in Equation ( EQREF7 )-() where INLINEFORM5 , and INLINEFORM6 are the learned parameters of the GRU update, candidate activation and reset gate. To get the hidden state INLINEFORM7 of the timestep, the hypothesis-specific hidden states INLINEFORM8 are combined by a pooling function (Equation ). DISPLAYFORM0
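The equation references above were lost in extraction. Assuming the standard GRU formulation, the hypothesis-specific hidden states would be computed roughly as follows; the notation here is an assumption, with $x_t^j$ denoting the embedding of the $j$-th word hypothesis in timestep $t$.

```latex
% Standard GRU applied to the j-th hypothesis embedding x_t^j of timestep t
% (assumed notation; W_*, U_* are learned GRU parameters, h_{t-1} is the
% hidden state carried over from the previous timestep).
\[
\begin{aligned}
  z_t^j &= \sigma\big(W_z x_t^j + U_z h_{t-1}\big) &&\text{update gate}\\
  r_t^j &= \sigma\big(W_r x_t^j + U_r h_{t-1}\big) &&\text{reset gate}\\
  \tilde{h}_t^j &= \tanh\big(W x_t^j + U\,(r_t^j \odot h_{t-1})\big) &&\text{candidate activation}\\
  h_t^j &= (1 - z_t^j) \odot h_{t-1} + z_t^j \odot \tilde{h}_t^j &&\text{hypothesis-specific state}
\end{aligned}
\]
```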
We experiment with the two different pooling functions INLINEFORM0 for the INLINEFORM1 hidden GRU states INLINEFORM2 of the alternative word hypotheses that were used by BIBREF14 : average pooling, which treats all hypotheses of a timestep equally, and weighted pooling, which weights each hypothesis by its score.
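A hedged sketch of the two pooling variants over the $K$ hypothesis states of a timestep is shown below; $p_t^j$ stands for the score of hypothesis $j$, and the exact weighting scheme in the original paper may differ.

```latex
% Assumed notation: K hypotheses in timestep t, p_t^j the score of hypothesis j.
\[
\begin{aligned}
  \text{average pooling:}  \quad h_t &= \tfrac{1}{K} \textstyle\sum_{j=1}^{K} h_t^j\\
  \text{weighted pooling:} \quad h_t &= \textstyle\sum_{j=1}^{K} p_t^j\, h_t^j
\end{aligned}
\]
```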
Instead of the system output in sentence form we use the dialog act representations in the form of ⟨dialog-act, slot, value⟩ triples, e.g. `inform food Thai', which contain the same information in a more compact way. Following BIBREF7 , we initialize the word embeddings with 300-dimensional semantically specialized PARAGRAM-SL999 embeddings BIBREF21 . The hyper-parameters for our model are listed in the appendix.
The cnet GRU subsumes a standard GRU-based RNN if each token in the input is represented as a timestep with a single hypothesis. We adopt this method for the system dialog acts and the baseline model that encode only the best ASR hypothesis.
Data
In our experiments, we use the dataset provided for the second Dialog State Tracking Challenge (DSTC2) BIBREF22 that consists of user interactions with an SDS in the restaurant domain. It encompasses 1612, 506, 1117 dialogs for training, development and testing, respectively. Every dialog turn is annotated with its dialog state encompassing the three goals for area (7 values), food (93 values) and price range (5 values) and 8 requestable slots, e.g. phone and address. We train on the manual transcripts and the cnets provided with the dataset and evaluate on the cnets.
Some system dialog acts in the DSTC2 dataset do not correspond to words and thus were not included in the pretrained word embeddings. Therefore, we manually constructed a mapping of dialog acts to words contained in the embeddings, where necessary, e.g. we mapped expl-conf to explicit confirm.
In order to estimate the potential of improving DST by cnets, we investigated the coverage of words from the manual transcripts for different ASR output types. As shown in Table TABREF10 , cnets improve the coverage of words from the transcripts by more than 15 percentage points over the best hypothesis and more than five percentage points over the 10-best hypotheses.
However, the cnets provided with the DSTC2 dataset are quite large. The average cnet consists of 23 timesteps with 5.5 hypotheses each, amounting to about 125 tokens, while the average best hypothesis contains four tokens. Manual inspection of the cnets revealed that they contain a lot of noise such as interjections (uh, oh, ...) that never appear in the 10-best lists. The appendix provides an exemplary cnet for illustration. To reduce the processing time and amount of noisy hypotheses, we remove all interjections and additionally experiment with pruning hypotheses with a score below a certain threshold. As shown in Table TABREF10 , this does not discard too many correct hypotheses but markedly reduces the size of the cnet to an average of seven timesteps with two hypotheses.
Results and Discussion
We report the joint goals and requests accuracy (all goals or requests are correct in a turn) according to the DSTC2 featured metric BIBREF22 . We train each configuration 10 times with different random seeds and report the average, minimum and maximum accuracy. To study the impact of ASR errors on DST, we trained and evaluated our model on the different user utterance representations provided in the DSTC2 dataset. Our baseline model uses the best hypothesis of the batch ASR system, which has a word error rate (WER) of 34% on the DSTC2 test set. Most DST approaches use the hypotheses of the live ASR system, which has a lower WER of 29%. We train our baseline on the batch ASR outputs as the cnets were also produced by this system. As can be seen from Table TABREF11 , the DST accuracy slightly increases for the higher-quality live ASR outputs. More importantly, the DST performance drastically increases, when we evaluate on the manual transcripts that reflect the true user utterances nearly perfectly.
Results of the Model with Cnet Encoder
Table TABREF13 displays the results for our model evaluated on cnets for increasingly aggressive pruning levels (discarding only interjections, additionally discarding hypotheses with scores below 0.001 and 0.01, respectively). As can be seen, using the full cnet except for interjections does not improve over the baseline. We believe that the share of noisy hypotheses in the DSTC2 cnets is too high for our model to be able to concentrate on the correct hypotheses. However, when pruning low-probability hypotheses both pooling strategies improve over the baseline. Yet, average pooling performs worse for the lower pruning threshold, which shows that the model is still affected by noise among the hypotheses. Conversely, the model can exploit a rich but noisy hypothesis space by weighting the information retained from each hypothesis: Weighted pooling performs better for the lower pruning threshold of 0.001 with which we obtain the highest result overall, improving the joint goals accuracy by 1.6 percentage points compared to the baseline. Therefore, we conclude that it is beneficial to use information from all alternatives and not just the highest scoring one but that it is necessary to incorporate the scores of the hypotheses and to prune low-probability hypotheses. Moreover, we see that an ensemble model that averages the predictions of ten cnet models trained with different random seeds also outperforms an ensemble of ten baseline models.
Although it would be interesting to compare the performance of cnets to full lattices, this is not possible with the original DSTC2 data as there were no lattices provided. This could be investigated in further experiments by running a new ASR system on the DSTC2 dataset to obtain both lattices and cnets. However, these results will not be comparable to previous results on this dataset due to the different ASR output.
Comparison to the State of the Art
The current state of the art on the DSTC2 dataset in terms of joint goals accuracy is an ensemble of neural models based on hand-crafted update rules and RNNs BIBREF16 . Besides, this model uses a delexicalization mechanism that replaces mentions of slots or values from the DSTC2 ontology by a placeholder to learn value-independent patterns BIBREF8 , BIBREF23 . While this approach is suitable for small domains and languages with a simple morphology such as English, it becomes increasingly difficult to locate words or phrases corresponding to slots or values in wider domains or languages with a rich morphology BIBREF7 and we therefore abstained from delexicalization.
The best result for the joint requests was obtained by a ranking model based on hand-crafted features, which relies on separate SLU systems besides ASR BIBREF11 . SLU is often cast as sequence labeling problem, where each word in the utterance is annotated with its role in respect to the user's intent BIBREF24 , BIBREF25 , requiring training data with fine-grained word-level annotations in contrast to the turn-level dialog state annotations. Furthermore, a separate SLU component introduces an additional set of parameters to the SDS that has to be learned. Therefore, it has been argued to jointly perform SLU and DST in a single system BIBREF8 , which we follow in this work.
As a more comparable reference for our set-up, we provide the result of the neural DST system of BIBREF7 that like our approach does not use outputs of a separate SLU system nor delexicalized features. Our ensemble models outperform BIBREF7 for the joint requests but are a bit worse for the joint goals. We stress that our goal was not to reach for the state of the art but show that DST can benefit from encoding cnets.
Conclusion
As we show in this paper, ASR errors pose a major obstacle to accurate DST in SDSs. To reduce the error propagation, we suggest to exploit the rich ASR hypothesis space encoded in cnets that contain more correct hypotheses than conventionally used n-best lists. We develop a novel method to encode cnets via a GRU-based RNN and demonstrate that this leads to improved DST performance compared to encoding the best ASR hypothesis on the DSTC2 dataset.
In future experiments, we would like to explore further ways to leverage the scores of the hypotheses, for example by incorporating them as an independent feature rather than a direct weight in the model.
Acknowledgments
We thank our anonymous reviewers for their helpful feedback. Our work has been supported by the German Research Foundation (DFG) via a research grant to the project A8 within the Collaborative Research Center (SFB) 732 at the University of Stuttgart.
A. Hyper-Parameters | It is a compact representation of the ASR system's rich hypothesis space in which every state has a single predecessor and groups together the alternative word hypotheses of a fixed time interval (timestep), in contrast to the more general speech lattices.
f9751e0ca03f49663a5fc82b33527bc8be1ed0aa | f9751e0ca03f49663a5fc82b33527bc8be1ed0aa_0 | Q: What type of simulations of real-time data feeds are used for validation?
Text: Introduction ::: Healthcare Information Technology and the Interoperability Problem
Since the early 1970s, healthcare information technology has moved toward comprehensive electronic medical records (EMR) in which almost every aspect of the patient's healthcare has been digitized and retained indefinitely BIBREF0, which has vastly improved the efficiency with which patient information can be retained, communicated, and analyzed. At the same time, the healthcare industry has moved from a fee-for-service model to a value-based model, facilitated in part by the existence of such a record and in part by public policy, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 BIBREF1, which provided financial incentives for the "meaningful use" of electronic medical records.
The realization of a holistic medical record has been slowed by various obstacles, chief among them is the problem of interoperability between systems. The problem of interoperability arises almost as soon as a healthcare organization begins to choose a vendor for their electronic medical record, when they are faced with a choice between an architecture based on a single monolithic system or a so-called best-of-breed approach involving multiple discrete systems, each chosen for its superior performance in a narrow domain. The monolith claims to handle all aspects of healthcare information management; the best-of-breed approach entails a multiplicity of systems, each of which may be superior in its domain but which are not smoothly integrated.
A major difference between the two architectures is how they solve the problem of interoperability. In the case of the monolith, the problem is solved by the system vendor, at least in principle, but at the cost to the customer of a loss of choice. In the best-of-breed approach, the problem of interoperability is shifted onto the customer, who incurs an often hefty cost in the form of a more complex systems architecture and the resulting need for specialized hardware, software, and staff to maintain it.
In a best-of-breed approach, the need for instantaneous intersystems communication is typically handled via an Enterprise Service Bus (ESB) BIBREF2, which ensures real-time message delivery to subscribing systems. Additionally, if the data is to be analyzed in combination, rather than in isolation within the silo of a single system, it must be recombined and stored outside of these systems. This is typically done in an Enterprise Data Warehouse (EDW) BIBREF3 and requires further specialized hardware, software, and staff. However, most EDWs are based on a batch-loading system that runs during off-peak hours for the previous calendar day's business BIBREF3; thus, while an EDW can be a powerful tool for retrospective analysis, it is unsuitable for real-time applications.
Interoperability is a serious challenge that modern healthcare systems must address in order to adequately serve their patients. In this paper we demonstrate a hitherto underused approach that combines the attractive aspects of both an enterprise service bus and an enterprise data warehouse to arrive at real-time analytics.
Background ::: Health Level Seven Version 2 (HL7v2)
HL7v2 is a healthcare messaging standard developed by the standards organization Health Level Seven International. It first emerged in 1988 and today is the most widely used such standard, having been adopted by over ninety-five percent of health systems in the United States and thirty-five countries worldwide BIBREF4. As such, it is something of a universal medium in the field of healthcare interoperability, yet it is terse and, without specialized training and access to the standard reference, cryptic.
Each HL7 message describes an event in a healthcare workflow and breaks down hierarchically into segments, fields, components, subcomponents, repeated components, and so on. There are well over one hundred types of messages and several times as many types of segments in HL7v2. The current version of the specification, for HL7 v2.8, is well over 2,500 pages long and contains nearly one million words. BIBREF0 Partly as a consequence of this complexity, health interoperability has become a specialized field, replete with certifications and training and entire careers built on knowledge of HL7v2. An example HL7 message describing the following information is shown in Figure FIGREF4
The PID (Patient Identification) segment contains the demographic information of the patient. Eve E. Everywoman was born on 1962-03-20 and lives in Statesville OH. Her patient ID number (presumably assigned to her by the Good Health Hospital) is 555-44-4444.
The OBR (Observation Request) segment identifies the observation as it was originally ordered: 15545 GLUCOSE. The observation was ordered by Patricia Primary MD and performed by Howard Hippocrates MD.
The OBX (Observation) segment contains the results of the observation: 182 mg/dl.
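Because HL7v2 is delimiter-based, a great deal can be done with plain string processing. The sketch below parses a simplified PID segment; the sample segment string and the field positions used here are illustrative assumptions rather than a faithful reproduction of the official field numbering.

```java
// Minimal, illustrative HL7v2 segment parsing. The sample segment and the
// field indexes below are simplified assumptions, not the official layout.
public class Hl7PidSketch {
    public static void main(String[] args) {
        String pid = "PID|1||555-44-4444||EVERYWOMAN^EVE^E||19620320|F|||2222 HOME ST^^STATESVILLE^OH";
        String[] fields = pid.split("\\|");       // '|' separates fields
        String patientId = fields[3];              // simplified: identifier in field 3
        String[] name = fields[5].split("\\^");   // '^' separates components
        String dob = fields[7];
        System.out.printf("id=%s name=%s %s dob=%s%n", patientId, name[1], name[0], dob);
    }
}
```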
Background ::: Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR)
FHIR BIBREF5 is a new open standard for healthcare data developed by the same company that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve.
In contrast to HL7v2, which is based on events in a healthcare workflow such as admit, discharge, and transfer, FHIR is built on the notion of conceptual entities from the healthcare domain, such as Patient, Encounter, and Observation, i.e. resources. Currently, FHIR encompasses 143 resources, each of which is described abstractly in the FHIR standard with the attributes Name, Flags, Cardinality, Type, and Description & Constraints BIBREF7. In a concrete implementation of FHIR, resources are serialized to one of the data exchange formats listed above. An example of a FHIR XML message is shown in Figure FIGREF5.
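To illustrate how compact it is to build and serialize a FHIR resource programmatically, the sketch below uses the open-source HAPI FHIR library; the paper does not state which FHIR tooling, if any, was used, so the library choice and the sample values are assumptions.

```java
import ca.uhn.fhir.context.FhirContext;
import org.hl7.fhir.r4.model.Patient;

// Illustrative only: HAPI FHIR is one common Java FHIR library, not
// necessarily the tooling used by the authors.
public class FhirPatientSketch {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        Patient patient = new Patient();
        patient.addIdentifier().setValue("555-44-4444");
        patient.addName().setFamily("Everywoman").addGiven("Eve");
        // Serialize to the web-friendly exchange formats FHIR supports.
        String json = ctx.newJsonParser().setPrettyPrint(true).encodeResourceToString(patient);
        String xml = ctx.newXmlParser().encodeResourceToString(patient);
        System.out.println(json);
        System.out.println(xml);
    }
}
```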
Background ::: Semantic Web
The term Semantic Web BIBREF8 denotes an interconnected machine-readable network of information. In some ways it is analogous to the World-Wide Web, but with some crucial differences. The most important similarity is in the vision for the two technologies: Like the World-Wide Web, the Semantic Web was envisioned as a way for users from different institutions, countries, disciplines, etc. to exchange information openly and in doing so to add to the sum of human knowledge. The difference, however, is in the different emphases put on human readability versus machine readability: Whereas the World-Wide Web was intended to be visually rendered by one of any number of web browsers before being read by humans and therefore prioritizes fault tolerance and general compatibility over precision, the semantic web prioritizes precision and logical rigor in order for the information contained in it to be machine readable and used for logical inference.
The similarities continue in the technologies used to implement the two webs. Information in both the Semantic Web and the World-Wide Web is intended to be accessed using the familiar data exchange protocol Hypertext Transfer Protocol (HTTP) and addressed using Uniform Resource Identifiers (URI) for the Semantic Web and Uniform Resource Locators (URL) for the World-Wide Web that tell the agent/browser how to find linked information. Even the data exchange formats are remarkably similar: The World-Wide Web uses Hypertext Markup Language (HTML) BIBREF9, a tree-structured subset of Standard Generalized Markup Language (SGML) BIBREF10, whereas the Semantic Web uses a variety of tree-structured formats such as XML, JSON, Terse RDF Triple Language (i.e. Turtle/TTL) BIBREF11, etc.
The most significant difference between the World-Wide Web and the Semantic Web is in the type of information that they encode. The Semantic Web delivers a payload of simple logical statements known as triples, each consisting of a subject, predicate, and object, whereas the World-Wide Web delivers a series of directives to the web browser that govern the layout of the rendered page as well as the content of the page, in the form of text, images, videos, scripts, and so on. This difference in payloads corresponds to their different purposes – the payload is delivered in the first case to an intelligent agent and in the second case to a web browser.
In more technical terms, the semantic web can be thought of as a distributed directed graph whose vertices are resources and whose edges are statements describing those resources. In its openness and decentralized nature, it bears some resemblance to the World Wide Web; however, whereas the World Wide Web consists of ad hoc, unsynchronized data presented in a variety of formats, the semantic web is a machine-readable body of information that can be synchronized while still coming from a variety of sources.
Background ::: Resource Description Framework (RDF)
RDF is the backbone of the semantic web BIBREF8. It is described as a framework, rather than a protocol or a standard, because it is an abstract model of information whose stated goal is "to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines (a priori) the semantics of any application domain." BIBREF12 Its concrete realization is typically a serialization into one of several formats including XML, JSON, TTL, etc.
The basic unit of information in RDF is a statement expressed as a logical triple; that is, a statement of the form <subject> <predicate> <object>, in which the predicate expresses a relationship between the subject and the object: for instance, bloodPressure :value 120. The subject must be a resource, that is, an object consisting of one or more statements, and the object may be either a literal, that is, a simple numeric or textual value, or another resource. The predicate describes some aspect or property of the subject. Because both the subject and the object can be resources, the object may also be described by statements in which it is the subject, leading to a complex graph structure.
A group of statements can be used to perform inference on their resources, thus creating new statements and enriching the semantic universe of the data set. For instance, the canonical syllogism "Socrates is a man; all men are mortal; therefore, Socrates is mortal" can be reproduced in the two statements Socrates :isA man and man :is mortal, resulting in a synthesized third statement: Socrates :is mortal. RDF supports "inference, shared semantics across multiple standards and data formats, data integration, semantic data validation, compliance enforcement, SPARQL [SPARQL Protocol and RDF Query Language (SPARQL)] queries and other uses." BIBREF13.
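The Socrates example can be reproduced with Apache Jena, the triplestore library used later in the paper. In the sketch below the prose predicates ':isA' and ':is' are recast as rdf:type and rdfs:subClassOf so that a standard RDFS reasoner can derive the third statement; the namespace is an arbitrary placeholder.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

// Sketch: the prose example's ':isA'/':is' predicates are recast into RDF(S)
// vocabulary so that Jena's built-in RDFS reasoner can infer the conclusion.
public class SyllogismSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/";          // illustrative namespace
        Model base = ModelFactory.createDefaultModel();
        Resource socrates = base.createResource(ns + "Socrates");
        Resource man = base.createResource(ns + "Man");
        Resource mortal = base.createResource(ns + "Mortal");

        base.add(socrates, RDF.type, man);          // Socrates is a man
        base.add(man, RDFS.subClassOf, mortal);     // all men are mortal

        InfModel inf = ModelFactory.createRDFSModel(base);
        System.out.println("Socrates is mortal: " + inf.contains(socrates, RDF.type, mortal)); // true
    }
}
```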
Background ::: FHIR/RDF
One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between "patient x has viral pneumonia" (statement of fact) and "Dr. Jones diagnosed patient x with viral pneumonia" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is "a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood." The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems.
Background ::: SPARQL Protocol and RDF Query Language (SPARQL)
RDF has an associated query language that can be used to search for matching statements, known as SPARQL. Although syntactically and semantically based on Structured Query Language (SQL), the information model over which it searches is RDF's directed graph of resources and statements, not the familiar relations stored in a relational database.
The syntax is beyond the scope of this paper, but in general SPARQL queries outline the shape of the graph they wish to find. For an example SPARQL query that searches for blood pressure readings over 120 mmHg, see Figure FIGREF6.
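Executing such a query with Jena's ARQ engine looks roughly like the sketch below. The graph shape used here (the ex:patient and ex:systolic properties) is a deliberately simplified stand-in, not the actual FHIR/RDF structure, which is considerably more verbose.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch of running a SPARQL SELECT with Jena ARQ. The property names are
// simplified placeholders, not real FHIR/RDF predicates.
public class BloodPressureQuerySketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();   // in practice: the TDB dataset's model
        String sparql =
            "PREFIX ex: <http://example.org/> " +
            "SELECT ?patient ?systolic WHERE { " +
            "  ?obs ex:patient ?patient ; ex:systolic ?systolic . " +
            "  FILTER (?systolic > 120) }";
        try (QueryExecution qexec = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                System.out.println(row.get("patient") + " -> " + row.get("systolic"));
            }
        }
    }
}
```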
Method
At a high level, the semantic enrichment engine is designed to take healthcare data in a variety of formats as input and store it in a triplestore database that users can query. In this way, the engine acts as both a collector, receiving messages from numerous sources, and a bus for delivering messages to multiple sources, as well as a real-time analytics platform. For example, a message from a vital signs monitor and from a registration system can be coalesced into a new stream containing blood pressure, temperature, and laboratory values for use in a machine learning model to predict sepsis.
To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication. This modular design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors' experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.
A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7
Method ::: Class Hierarchy
The project was written in Java, with each major component in its own package. There is a top-level class named ActiveMQEnabled that handles common tasks, such as connecting to the message broker, logging, event handling, and other such functionality. Each type of component in the pipeline - input, encoder, store, query, output, and application - is a subclass of ActiveMQEnabled as well as a superclass to specific types of those components. Most components are able both to send and receive messages, with certain exceptions: for example, inputs can only send and outputs can only receive. Stores can both receive and send, but in the concrete implementation in this project, the TDB store only receives (queries are better handled as timed polls, rather than being event-driven).
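The internals of ActiveMQEnabled are not listed in the paper, so the following is only a guess at its shape: a small base class that owns the JMS plumbing (shown here with ActiveMQ's standard JMS API) and lets subclasses send text messages and react to incoming ones. All field and method names are assumptions.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Hedged sketch of a possible ActiveMQEnabled base class; the real class's
// fields and method names are not given in the paper.
public abstract class ActiveMQEnabledSketch implements MessageListener {
    private final Session session;
    private final MessageProducer producer;

    protected ActiveMQEnabledSketch(String brokerUrl, String inQueue, String outQueue) throws JMSException {
        Connection connection = new ActiveMQConnectionFactory(brokerUrl).createConnection();
        connection.start();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        if (inQueue != null) {                               // inputs have no inbound queue
            session.createConsumer(session.createQueue(inQueue)).setMessageListener(this);
        }
        producer = outQueue != null ? session.createProducer(session.createQueue(outQueue)) : null;
    }

    protected void send(String text) throws JMSException {   // outputs never call this
        producer.send(session.createTextMessage(text));
    }

    @Override
    public abstract void onMessage(Message message);         // subclasses implement their stage logic
}
```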
Method ::: Inputs
In the first stage of the module, simulated inputs represent a variety of healthcare entities and arrive in a variety of formats: patients in a pipe-delimited list, encounters as FHIR messages, and observations as HL7v2 messages. As discussed in the Background section, all of these are widely used input formats in modern health systems and realistically represent the heterogeneous message exchanges that are likely to occur in a real healthcare setting. Each input is configurable with regard to message timing and frequency, and the vital signs can be made to simulate various conditions such as hypertension or hypothermia. An example of a generated vital sign is shown in Figure FIGREF8.
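One possible shape for such a simulator is sketched below: it draws vital-sign values from condition-specific ranges and formats them as a simplified HL7-style observation. The value ranges, the message layout, and the class names are illustrative assumptions, not the paper's actual simulation routines.

```java
import java.util.Random;

// Illustrative vital-signs simulator; ranges and message layout are assumptions
// chosen to mimic the kinds of inputs described in the paper.
public class VitalSignSimulatorSketch {
    private static final Random RNG = new Random();

    /** Generate a simplified observation message for one patient. */
    static String nextObservation(String patientId, boolean simulateHypertension) {
        int systolic = simulateHypertension ? 150 + RNG.nextInt(40) : 100 + RNG.nextInt(30);
        int diastolic = simulateHypertension ? 95 + RNG.nextInt(20) : 60 + RNG.nextInt(20);
        double temp = 36.0 + RNG.nextDouble() * 1.5;
        // In the full pipeline this string would be published to the observation
        // input queue, named per the INPUT.ENTITY.FORMAT convention.
        return String.format("PID|1||%s%n", patientId)
             + String.format("OBX|1|NM|BP^BLOOD PRESSURE||%d/%d|mmHg%n", systolic, diastolic)
             + String.format("OBX|2|NM|TEMP^BODY TEMPERATURE||%.1f|C", temp);
    }

    public static void main(String[] args) {
        System.out.println(nextObservation("555-44-4444", true));
    }
}
```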
Method ::: Encoder
The encoder stage itself has two stages. In the first, input messages arriving at queues named according to the convention "INPUT.ENTITY.FORMAT" are retrieved, parsed, and transformed into internal representations of common domain objects, in this case Patient, Encounter, and Observation. In the second stage, these internal representations are transformed into internal representations of RDF graphs of FHIR resources and written out to the next message queue. By decoupling the parsing phase from the RDF-generating phase, the number of parsing and generating routines required for N sources and M resource types is reduced from N x M to N + M. This also allows parsing and generating jobs to be written in parallel and by different developers using the common internal representations as an intermediate layer. For instance, one developer could be writing the code to parse an HL7 ADT (admit/discharge/transfer) message while another developer was writing the code to turn this message into Patient, Encounter, and Observation resources. (Note that a single HL7 message can be used to create multiple FHIR resources BIBREF14). An example of a POJO to FHIR/RDF message encoder class is shown in Figure FIGREF9.
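A hedged sketch of the two encoder phases for the pipe-delimited patient feed follows: first parse into a plain intermediate object, then emit an RDF graph. The intermediate class, its field layout, and the simplified namespace and predicates stand in for the much richer FHIR/RDF vocabulary and are assumptions.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

// Two-phase encoder sketch: source format -> internal POJO -> RDF graph.
// The PatientPojo fields and the fhir-ish predicates below are simplified
// assumptions, not the actual FHIR/RDF vocabulary.
public class PatientEncoderSketch {
    record PatientPojo(String id, String family, String given, String birthDate) {}   // Java 16+ record

    /** Phase 1: parse a pipe-delimited input line into the internal representation. */
    static PatientPojo parse(String line) {
        String[] f = line.split("\\|");
        return new PatientPojo(f[0], f[1], f[2], f[3]);
    }

    /** Phase 2: turn the internal representation into an RDF graph. */
    static Model toRdf(PatientPojo p) {
        String ns = "http://example.org/fhir/";     // placeholder namespace
        Model m = ModelFactory.createDefaultModel();
        Resource patient = m.createResource(ns + "Patient/" + p.id());
        patient.addProperty(m.createProperty(ns, "family"), p.family());
        patient.addProperty(m.createProperty(ns, "given"), p.given());
        patient.addProperty(m.createProperty(ns, "birthDate"), p.birthDate());
        return m;
    }

    public static void main(String[] args) {
        toRdf(parse("555-44-4444|Everywoman|Eve|1962-03-20")).write(System.out, "TURTLE");
    }
}
```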
Method ::: Store
The store stage writes RDF-encoded statements to a triplestore database (TDB). For this implementation, the database was Apache Jena Triplestore Database (TDB) BIBREF15, which operates as a local on-disk database, although it could equally be a distributed in-memory cache or other implementation in production. It is at this point that the incoming messages are truly conformed to a universal model, as TDB does not record any information relating to encoding. An example of an RDF to TDB (triplestore database) class is shown in Figure FIGREF10.
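Writing an encoded graph into Jena TDB is a transactional operation; a minimal sketch is shown below, where the on-disk path is an arbitrary placeholder.

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

// Minimal sketch of persisting an RDF model into a local Jena TDB store.
// The store path is a placeholder; in the paper's design this step is driven
// by messages arriving from the encoder queue.
public class TdbStoreSketch {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("data/tdb");   // on-disk triplestore
        Model incoming = ModelFactory.createDefaultModel();       // normally: the encoder's FHIR/RDF output
        dataset.begin(ReadWrite.WRITE);
        try {
            dataset.getDefaultModel().add(incoming);
            dataset.commit();
        } finally {
            dataset.end();
        }
    }
}
```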
Method ::: Query
The query stage polls the triplestore database for RDF graphs matching specified criteria, for instance, low blood pressure combined with low body temperature and high pulse rate, indicating hypothermia, or patients with blood pressure readings over a certain threshold, indicating hypertension. It passes matching patients on to the output stage for data capture or immediate use in applications.
SPARQL queries against FHIR/RDF (see Figure FIGREF6) can often be complex and verbose, simply because a high level of detail is required to represent healthcare data unambiguously in FHIR, and an equally high level of detail is required to extract it unambiguously.
As a means of simplifying the work required to query the data, we considered a two-phase design in which the first layer would extract the relevant data from the TDB database in great detail before using SPARQL's CONSTRUCT syntax to build a simplified representation of the data for use by the second layer. This idea has potential, but after a few tries at writing the code to implement it, there was too much loss of detail for it to be worth pursuing in this iteration. In the end, the default option of writing a detailed, if verbose, SPARQL query once was deemed a better option than the added complexity and potential loss of fidelity of the two-layer approach.
Method ::: Output
In the output stage, the results of the queries in the previous stage are written out to an output destination such as a text file or a screen. This differs from the application stage in that the results are written immediately to an output sink such as a file or screen on the local computer rather than being consumed by a downstream application. Its use in this project was limited to debugging.
Method ::: Application
In the application stage, a variety of applications (complex event processors, common data models, machine learning models, etc.) receive the outputs of the queries from the prior stages and use them as inputs to particular applications. A high-level view of how the semantic encoder might be used in clinical workflow is shown in Figure FIGREF11
Several applications presented themselves as potentially benefiting from a semantic enrichment engine such as this one. One such application was complex event processing (CEP), in which streams of data are analyzed in search of events in real time BIBREF16. From simple events more complex events can be derived, so that a number of individually innocuous events may add up to either an opportunity or a threat event. In a healthcare setting, this could mean monitoring patient vital signs and flagging them as high, low, or normal, then analyzing the combination of vital signs for a condition or set of conditions. Additionally, a patient's individual health conditions, such as comorbidities, recent procedures, and so on could be used to inform the meaning of the instantaneous vital signs as they are received. Using data from the TDB store, we were able to write several queries in Esper, a well-known complex event processing engine BIBREF17, to detect conditions that were initially simulated by the vital signs input, such as hypothermia or hypertension. To some extent, the RDF queries used to feed Esper overlapped with the capabilities of Esper itself, although Esper's query language EPL is much more versatile than SPARQL for event processing.
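The paper does not list its EPL statements, so the following is only an illustration of the pattern: register a simple event type and flag hypertensive readings as they stream in. It uses the pre-8.x Esper API (EPServiceProvider); newer Esper releases use a different compiler/runtime API, and the event class, the EPL statement, and the threshold here are assumptions.

```java
import com.espertech.esper.client.*;

// Illustrative CEP rule with the pre-8.x Esper API; the VitalSign event class,
// the EPL statement, and the threshold are assumptions, not the paper's rules.
public class CepSketch {
    public static class VitalSign {
        private final String patientId; private final int systolic;
        public VitalSign(String patientId, int systolic) { this.patientId = patientId; this.systolic = systolic; }
        public String getPatientId() { return patientId; }
        public int getSystolic() { return systolic; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("VitalSign", VitalSign.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        EPStatement stmt = engine.getEPAdministrator().createEPL(
            "select patientId, systolic from VitalSign(systolic > 140)");
        stmt.addListener((newEvents, oldEvents) ->
            System.out.println("Hypertension alert: " + newEvents[0].get("patientId")));

        engine.getEPRuntime().sendEvent(new VitalSign("555-44-4444", 160));   // fires the listener
    }
}
```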
Another such project was the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) BIBREF18. This is an analytical database intended to collate data from multiple partner data sources and conform it to a common representation, using standardized vocabularies such as LOINC BIBREF19 and SNOMED-CT BIBREF20 in order to facilitate collaborative research. Using data queried from the TDB store, we were able to build several data-loading jobs to populate an OMOP-CDM database. This application takes advantage of the semantic enrichment engine's ability to conform data from disparate sources, since by the application stage all the data has been conformed to FHIR/RDF and is ready to be loaded to the OMOP database with only one transformation (from FHIR/RDF to OMOP schemas).
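At its core, such a data-loading job maps query results onto OMOP tables. The sketch below inserts one reading into a cut-down version of the OMOP measurement table over plain JDBC; the connection URL, the column subset, and the concept ID are placeholders, and a real job would resolve source codes against the standardized vocabularies.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Illustrative OMOP-CDM loading step over JDBC. The connection URL, the column
// subset, and the concept_id value are placeholders; a real job would resolve
// source codes against the OMOP standardized vocabularies.
public class OmopLoaderSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/omop", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "insert into measurement (measurement_id, person_id, measurement_concept_id, " +
                 "measurement_date, value_as_number) values (?, ?, ?, current_date, ?)")) {
            ps.setLong(1, 1L);        // surrogate key (placeholder)
            ps.setLong(2, 42L);       // person_id from the person table (placeholder)
            ps.setLong(3, 0L);        // concept id for the measured quantity (placeholder)
            ps.setDouble(4, 160.0);   // the value extracted from the FHIR/RDF query results
            ps.executeUpdate();
        }
    }
}
```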
Method ::: Validation
Health Level Seven International (HL7) provides a FHIR validator, which was useful for ensuring that the FHIR generated by the encoder was correctly formed. ShEx (Shape Expressions) BIBREF21 is a language for describing the expected shape of RDF and testing it for conformity to that shape. Its syntax is similar to Turtle and SPARQL, while its semantics resemble those of regular expression languages such as RelaxNG BIBREF22. We were limited in our ability to validate FHIR conformance due to limitations of the FHIR validation tool (vague error messages, program crashes, etc.).
Method ::: Challenges
Our needs were twofold and, at first, apparently contradictory. The first was to store data from disparate sources so that the sources could be joined up and benefit from synergies among the different semantic components embedded in the data. The second was to answer queries about the data over a finite time range. The challenge was that the mechanism that was to trigger the execution of a query, the receipt of a message from the store, happened with such frequency that the query engine quickly became overloaded and unable to respond in a timely fashion to new requests. This necessitated a redesign of parts of the encoder module and the query engine, such that each resource was timestamped when it was encoded and each query specified a time range within which to return results. Prior to this redesign, the query engine was querying the triple store each time a message arrived without specifying a time bound, resulting in a constantly increasing number of results that eventually would overmatch the system's capabilities.
Another challenge was that RDF does not easily support streams BIBREF23. With each query, all matching results are returned, not only the new results since the last query. This means the result size of the query increases monotonically until the system is overwhelmed. To design around this, we timestamped each entity as it arrived and used this timestamp as a filter in the subsequent queries. This worked well and is not unlike what CEP systems do natively.
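The redesign amounts to stamping each encoded resource and constraining every poll to a sliding window. A sketch of such a time-bounded query follows; the ex:ingestedAt predicate and the use of xsd:dateTime literals are simplified assumptions rather than the paper's actual schema.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch of a time-bounded poll: only resources stamped within the current
// polling window are returned, so the result set no longer grows without bound.
public class WindowedQuerySketch {
    static void pollWindow(Model model, Instant windowStart) {
        String sparql =
            "PREFIX ex:  <http://example.org/> " +
            "PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> " +
            "SELECT ?obs WHERE { ?obs ex:ingestedAt ?ts . " +
            "  FILTER (?ts >= \"" + windowStart + "\"^^xsd:dateTime) }";
        try (QueryExecution qexec = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                System.out.println(results.nextSolution().get("obs"));
            }
        }
    }

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();   // in production: the TDB dataset's model
        pollWindow(model, Instant.now().minus(30, ChronoUnit.SECONDS));
    }
}
```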
Conclusion
The semantic enrichment engine described in this paper has broad applicability in healthcare operations and research. The data exchange standards, protocols, databases, query languages, and so forth used to implement this system are freely available. This system has characteristics of both an enterprise service bus and an enterprise data warehouse, but augments the analytical capability of the former and addresses the high latency of the latter. We expect the system can be used to inform artificial intelligence for inference, populate structured databases with enriched data streams, and derive new data for use in machine learning training. | simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting
ce18c50dadab7b9f28141fe615fd7de69355d9dd | ce18c50dadab7b9f28141fe615fd7de69355d9dd_0 | Q: How are FHIR and RDF combined?
Text: Introduction ::: Healthcare Information Technology and the Interoperability Problem
Since the early 1970s, healthcare information technology has moved toward comprehensive electronic medical records (EMR) in which almost every aspect of the patient's healthcare has been digitized and retained indefinitelyBIBREF0, which has vastly improved the efficiency with which patient information can be retained, communicated, and analyzed. At the same time, the healthcare industry has moved from a fee-for-service model to a value-based model, facilitated in part by the existence of such a record and in part by public policy, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 BIBREF1, which provided financial incentives for the "meaningful use" of electronic medical records.
The realization of a holistic medical record has been slowed by various obstacles, chief among them is the problem of interoperability between systems. The problem of interoperability arises almost as soon as a healthcare organization begins to choose a vendor for their electronic medical record, when they are faced with a choice between an architecture based on a single monolithic system or a so-called best-of-breed approach involving multiple discrete systems, each chosen for its superior performance in a narrow domain. The monolith claims to handle all aspects of healthcare information management; the best-of-breed approach entails a multiplicity of systems, each of which may be superior in its domain but which are not smoothly integrated.
A major difference between the two architectures is how they solve the problem of interoperability. In the case of the monolith, the problem is solved by the system vendor, at least in principle, but at the cost to the customer of a loss of choice. In the best-of-breed approach, the problem of interoperability is shifted onto the customer, who incurs an often hefty cost in the form of a more complex systems architecture and the resulting need for specialized hardware, software, and staff to maintain it.
In a best-of-breed approach, the need for instantaneous intersystems communication is typically handled via an Enterprise Service Bus (ESB)BIBREF2, which ensures real-time message delivery to subscribing systems. Additionally, if the data is to be analyzed in combination, rather than in isolation within the silo of a single system, it must be recombined and stored outside of these systems. This is typically done in an Enterprise Data Warehouse (EDW)BIBREF3 and requires further specialized hardware, software, and staff. However, most EDWs are based on a batch-loading system that runs during off-peak hours for the previous calendar day's businessBIBREF3; thus, while an EDW can be a powerful tool for retrospective analysis, it is unsuitable to real-time applications.
Interoperability is a serious challenge that modern healthcare systems must address in order to adequately serve their patients. In this paper we demonstrate a hitherto underused approach that combines the attractive aspects of both an enterprise service bus and an enterprise data warehouse to arrive at real-time analytics.
Background ::: Health Level Seven Version 2 (HL7v2)
HL7v2 is a healthcare messaging standard developed by the standards organization Health Level Seven International. It first emerged in 1988 and today is the most widely used such standard, having been adopted by over ninety-five percent of health systems in the United States and thirty-five countries worldwide BIBREF4. As such, it is something of a universal medium in the field of healthcare interoperability, yet it is terse and, without specialized training and access to the standard reference, cryptic.
Each HL7 message describes an event in a healthcare workflow and breaks down hierarchically into segments, fields, components, subcomponents, repeated components, and so on. There are well over one hundred types of messages and several times as many types of segments in HL7v2. The current version of the specification, for HL7 v2.8, is well over 2,500 pages long and contains nearly one million words. BIBREF0 Partly as a consequence of this complexity, health interoperability has become a specialized field, replete with certifications and training and entire careers built on knowledge of HL7v2. An example HL7 message describing the following information is shown in Figure FIGREF4
The PID (Patient Identification) segment contains the demographic information of the patient. Eve E. Everywoman was born on 1962-03-20 and lives in Statesville OH. Her patient ID number (presumably assigned to her by the Good Health Hospital) is 555-44-4444.
The OBR (Observation Request) segment identifies the observation as it was originally ordered: 15545 GLUCOSE. The observation was ordered by Particia Primary MD and performed by Howard Hippocrates MD.
The OBX (Observation) segment contains the results of the observation: 182 mg/dl.
Background ::: Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR)
FHIR BIBREF5 is a new open standard for healthcare data developed by the same company that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve.
In contrast to HL7v2, which is based on events in a healthcare workflow such as admit, discharge, and transfer, FHIR is built on the notion of conceptual entities from the healthcare domain, such as Patient, Encounter, and Observation, i.e. resources. Currently, FHIR encompasses 143 resources, each of which is described abstractly in the FHIR standard with the attributes Name, Flags, Cardinality, Type, and Description & Constraints. BIBREF7. In a concrete implementation of FHIR, resources are serialized to one of the data exchange formats listed above. An example of an FIHR XML message is shown in Figure FIGREF5.
Background ::: Semantic Web
The term Semantic Web BIBREF8 denotes an interconnected machine-readable network of information. In some ways it is analogous to the World-Wide Web, but with some crucial differences. The most important similarity is in the vision for the two technologies: Like the World-Wide Web, the Semantic Web was envisioned as a way for users from different institutions, countries, disciplines, etc. to exchange information openly and in doing so to add to the sum of human knowledge. The difference, however, is in the different emphases put on human readability versus machine readability: Whereas the World-Wide Web was intended to be visually rendered by one of any number of web browsers before being read by humans and therefore prioritizes fault tolerance and general compatibility over precision, the semantic web prioritizes precision and logical rigor in order for the information contained in it to be machine readable and used for logical inference.
The similarities continue in the technologies used to implement the two webs. Information in both the Semantic Web and the World-Wide Web is intended to be accessed using the familiar data exchange protocol Hypertext Transfer Protocol (HTTP) and addressed using Uniform Resource Identifiers (URI) for the Semantic Web and Uniform Resource Locations (URL) for the World-Wide Web that tell the agent/browser how to find linked information. Even the data exchange formats are remarkably similar: The World-Wide Web uses Hypertext Markup Language (HTML)BIBREF9, a tree-structured subset of Standard Generalized Markup Language (SGML)BIBREF10, whereas the Semantic Web uses a variety of tree-structured formats such as XML, JSON, Terse RDF Triple Language (i.e. Turtle/TTL)BIBREF11, etc.
The most significant difference between the World-Wide Web and the Semantic Web is in the type of information that they encode. The Semantic Web delivers a payload of simple logical statements known as triples, each consisting of a subject, predicate, and object, whereas the World-Wide Web delivers a series of directives to the web browser that govern the layout of the rendered page as well as the content of the page, in the form of text, images, videos, scripts, and so on. This difference in payloads corresponds to their different purposes – the payload is delivered in the first case to an intelligent agent and in the second case to a web browser.
In more technical terms, the semantic web can be thought of as a distributed directed graph whose vertices are resources and whose edges are statements describing those resources. In its openness and decentralized nature, it bears some resemblance to the World Wide Web; however, whereas the World Wide Web consists of ad hoc, unsynchronized data presented in a variety of formats, the semantic web is a machine-readable body of information that can be synchronized while still coming from a variety of sources.
Background ::: Resource Description Framework (RDF)
RDF is the backbone of the semantic webBIBREF8. It is described as a framework, rather than a protocol or a standard, because it is an abstact model of information whose stated goal is "to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines (a priori) the semantics of any application domain." BIBREF12 Its concrete realization is typically a serialization into one of several formats including XML, JSON, TTL, etc.
The basic unit of information in RDF is a statement expressed as a logical triple; that is, a statement of the form <subject> <predicate> <object>, in which the predicate expresses a relationship between the subject and the object: for instance, bloodPressure :value 120. The subject must be a resource, that is, an object consisting of one or more statements, and the object may be either a literal, that is, a simple numeric or textual value, or another resource. The predicate describes some aspect or property of the subject. Because both the subject and the object can be resources, the object may also be described by statements in which it is the subject, leading to a complex graph structure.
A group of statements can be used to perform inference on their resources, thus creating new statements and enriching the semantic universe of the data set. For instance, the canonical syllogism "Socrates is a man; all men are mortal; therefore, Socrates is mortal" can be reproduced in the two statements Socrates :isA man and man :is mortal, resulting in a synthesized third statement: Socrates :is mortal. RDF supports "inference, shared semantics across multiple standards and data formats, data integration, semantic data validation, compliance enforcement, SPARQL [SPARQL Protocol and RDF Query Language (SPARQL)] queries and other uses." BIBREF13.
Background ::: FHIR/RDF
One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between "patient x has viral pneumonia" (statement of fact) and "Dr. Jones diagnosed patient x with viral pneumonia" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is "a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood." The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems.
Background ::: SPARQL Protocol and RDF Query Language (SPARQL)
RDF has an associated query language that can be used to search for matching statements, known as SPARQL. Although syntactically and semantically based on Structured Query Language (SQL), the information model over which it searches is RDF's directed graph of resources and statements, not the familiar relations stored in a relational database.
The syntax is beyond the scope of this paper, but in general SPARQL queries outline the shape of the graph they wish to find. For an example SPARQL query that searches for blood pressure readings over 120 b.p.m., see Figure FIGREF6.
Method
At a high level, the semantic enrichment engine is designed to take healthcare data in a variety of formats as input and store it in a triplestore database that users can query. In this way, the engine acts as both a collector, receiving messages from numerous sources, and a bus for delivering messages to multiple sources, as well as a real-time analytics platform. For example, a message from a vital signs monitor and from a registration system can be coalesced into a new stream containing blood pressure, temperature, and laboratory values for use in a machine learning model to predict sepsis.
To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication. This modular design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.
A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7
Method ::: Class Hierarchy
The project was written in Java, with each major component in its own package. There is a top-level class named ActiveMQEnabled that handles common tasks, such as connecting to the message broker, logging, event handling, and other such functionality. Each type of component in the pipeline - input, encoder, store, query, output, and application - is a subclass of ActiveMQEnabled as well as a superclass to specific types of those components. Most components are able both to send and receive messages, with certain exceptions: for example, inputs can only send and outputs can only receive. Stores can both receive and send, but in the concrete implementation in this project, the TDB store only receives (queries are better handled as timed polls, rather than being event-driven).
Method ::: Inputs
In the first stage of the module, simulated inputs represent a variety of healthcare entities and arrive in a variety of formats: patients in a pipe-delimited list, encounters as FHIR messages, and observations as HL7v2 messages. As discussed in the Background section, all of these are widely used input formats in modern health systems and realistically represent the heterogeneous message exchanges that are likely to occur in a real healthcare setting. Each input is configurable with regard to message timing and frequency, and the vitals signs can be made to simulate various conditions such as hypertension or hypothermia. An example of a generate vital sign is shown in Figure FIGREF8
Method ::: Encoder
The encoder stage itself has two stages. In the first, input messages arriving at queues named according to the convention "INPUT.ENTITY.FORMAT" are retrieved, parsed, and transformed into internal representations of common domain objects, in this case Patient, Encounter, and Observation. In the second stage, these internal representations are transformed into internal representations of RDF graphs of FHIR resources and written out to the next message queue. By decoupling the parsing phase from the RDF-generating phase, the number of parsing and generating routines required for N sources and M resource types is reduced from N x M to N + M. This also allows parsing and generating jobs to be written in parallel and by different developers using the common internal representations as an intermediate layer. For instance, one developer could be writing the code to parse an HL7 ADT (admit/discharge/transfer) message while another developer was writing the code to turn this message into Patient, Encounter, and Observation resources. (Note that a single HL7 message can be used to create multiple FHIR resources BIBREF14). An example of a POJO to FIHR/RDF message encoder class is shown in Figure FIGREF9
Method ::: Store
The store stage writes RDF-encoded statements to a triplestore database (TDB). For this implementation, the database was Apache Jena Triplestore Database (TDB) BIBREF15, which operates as a local on-disk database, although it could equally be a distributed in-memory cache or other implementation in production. It is at this point that the incoming messages are truly conformed to a universal model, as TDB does not record any information relating to encoding. An example of a RDF to TDB (RDB Database) class is shown in Figure FIGREF10
Method ::: Query
The query stage polls the triplestore database for RDF graphs matching specified criteria, for instance, low blood pressure combined with low body temperature and high pulse rate, indicating hypothermia, or patients with blood pressure readings over a certain threshold, indicating hypertension. It passes matching patients on to the output stage for data capture or immediate use in applications.
SPARQL queries against FHIR/RDF (see Figure FIGREF6), can often be complex and verbose, simply because a high level of detail was required to represent healthcare data unambiguously in FHIR, and an equally high level of detail was required to extract it unambigously.
As a means of simplifying the work required to query the data, We considered a two-phase design in which the first layer would extract the relevant data from the TDB database in great detail before using RDF's CONSTRUCT syntax to build a simplified representation of the data for use by the second layer. This idea has potential, but after a few tries at writing the code to implement it, there was too much loss of detail for it to be worth pursuing in this iteration. In the end, the default option of writing a detailed, if verbose, RDF query once was deemed a better option than the added complexity and potential loss of fidelity of the two-layer approach.
Method ::: Output
In the output stage, the results of the queries in the previous stage are written out to an output destination such as a text file or a screen. This differs from the Application stage in that the input was intended to be written immediately to an output sink such as a file or screen on the local computer. Its use in this project was limited to debugging.
Method ::: Application
In the application stage, a variety of applications (complex event processors, common data models, machine learning models, etc.) receive the outputs of the queries from the prior stages and use them as inputs to particular applications. A high-level view of how the semantic encoder might be used in clinical workflow is shown in Figure FIGREF11
Several applications presented themselves as potentially benefiting from a semantic enrichment engine such as this one. One such application was complex event processing (CEP), in which streams of data are analyzed in search of events in real time BIBREF16. From simple events more complex events can be derived, so that a number of individually innocuous events may add up to either an opportunity or a threat event. In a healthcare setting, this could mean monitoring patient vital signs and flagging them as high, low, or normal, then analyzing the combination of vital signs for a condition or set of conditions. Additionally, a patient's individual health conditions, such as comorbidities, recent procedures, and so on could be used to inform the meaning of the instantaneous vital signs as they are received. Using data from the TDB store, we were able to write several queries in Esper, a well-known complex event processing engine BIBREF17, to detect conditions that were initially simulated by the vital signs input, such as hypothermia or hypertension. To some extent, the RDF queries used to feed Esper overlapped with the capabilities of Esper itself, although Esper's query language EPL is much more versatile than SPARQL for event processing.
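An EPL statement along these lines can flag sustained hypertension from a stream of readings. The event-type and property names here are assumptions rather than the project's actual schema, and the syntax follows recent Esper releases.

```
select patientId, avg(systolic) as avgSystolic
from BloodPressureReading#time(5 min)
group by patientId
having avg(systolic) > 140
```

The sliding five-minute window keeps the detection based on a run of readings rather than a single spurious measurement.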
Another such project was the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) BIBREF18. This is an analytical database intended to collate data from multiple partner data sources and conform it to a common representation, using standardized vocabularies such as LOINC BIBREF19 and SNOMED-CT BIBREF20 in order to facilitate collaborative research. Using data queried from the TDB store, we were able to build several data-loading jobs to populate an OMOP-CDM database. This application takes advantage of the semantic enrichment engine's ability to conform data from disparate sources, since by the application stage all the data has been conformed to FHIR/RDF and is ready to be loaded to the OMOP database with only one transformation (from FHIR/RDF to OMOP schemas).
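Loading one conformed observation then reduces to a row-level mapping of the kind sketched below. The concept IDs are placeholders for illustration only; a real load would resolve them from the OMOP vocabulary tables.

```sql
-- Illustrative insert of one blood-pressure observation into the OMOP CDM measurement table.
-- Concept IDs below are placeholders; real loads resolve them from the standardized vocabularies.
INSERT INTO measurement
    (measurement_id, person_id, measurement_concept_id,
     measurement_date, measurement_type_concept_id,
     value_as_number, unit_concept_id)
VALUES
    (1001, 42, 0,             -- concept for systolic blood pressure (placeholder)
     DATE '2019-03-20', 0,    -- measurement type concept (placeholder)
     152, 0);                 -- value in mmHg; unit concept (placeholder)
```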
Method ::: Validation
Health Level Seven International (HL7) provides a FHIR validator, which was useful for ensuring that the FHIR generated by the encoder was correctly formed. ShEx (Shape Expressions) BIBREF21 is a language for describing the expected shape of RDF and testing it for conformity to that shape. Its syntax is similar to Turtle and SPARQL, while its semantics resemble those of regular expression languages such as RelaxNG BIBREF22. We were limited in our ability to validate FHIR conformance due to limitations of the FHIR validation tool (vague error messages, program crashes, etc.).
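For the RDF side, a ShEx schema along these lines can assert the expected shape of an observation. The property names are simplified assumptions rather than the exact FHIR/RDF vocabulary.

```
PREFIX fhir: <http://hl7.org/fhir/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

<#ObservationShape> {
  fhir:status  [ "preliminary" "final" ] ;   # status restricted to a value set
  fhir:code    xsd:string ;                  # what was measured
  fhir:value   xsd:decimal                   # numeric result
}
```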
Method ::: Challenges
Our needs were twofold and, at first, apparently contradictory. The first was to store data from disparate sources so that the sources could be joined up and benefit from synergies among the different semantic components embedded in the data. The second was to answer queries about the data over a finite time range. The challenge was that the mechanism that was to trigger the execution of a query, the receipt of a message from the store, happened with such frequency that the query engine quickly became overloaded and unable to respond in a timely fashion to new requests. This necessitated a redesign of parts of the encoder module and the query engine, such that each resource was timestamped when it was encoded and each query specified a time range within which to return results. Prior to this redesign, the query engine was querying the triple store each time a message arrived without specifying a time bound, resulting in a constantly increasing number of results that eventually would overwhelm the system's capabilities.
Another challenge was that RDF does not easily support streams BIBREF23. With each query, all matching results are returned, not only the new results since the last query. This means the result size of the query increases monotonically until the system is overwhelmed. To design around this, we timestamped each entity as it arrived and used this timestamp as a filter in the subsequent queries. This worked well and is not unlike what CEP systems do natively.
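Concretely, each poll then carries a window filter on the encoding timestamp, roughly as follows. The ex:encodedAt property is an assumption standing in for whatever predicate the encoder actually attaches.

```sparql
PREFIX ex:  <http://example.org/pipeline#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?resource
WHERE {
  ?resource ex:encodedAt ?ts .
  FILTER ( ?ts >  "2019-06-01T12:00:00Z"^^xsd:dateTime &&
           ?ts <= "2019-06-01T12:00:30Z"^^xsd:dateTime )
}
```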
Conclusion
The semantic enrichment engine described in this paper has broad applicability in healthcare operations and research. The data exchange standards, protocols, databases, query languages, and so forth used to implement this system are freely available. This system has characteristics of both an enterprise service bus and an enterprise data warehouse, but augments the analytical capability of the former and addresses the high latency of the latter. We expect the system can be used to inform artificial intelligence for inference, populate structured databases with enriched data streams, and derive new data for use in machine learning training. | RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, RDF makes statements of fact, whereas FHIR makes records of events, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts
5a230fe4f0204bf2eebc0e944cf8defaf33d165c | 5a230fe4f0204bf2eebc0e944cf8defaf33d165c_0 | Q: What are the differences between FHIR and RDF?
Text: Introduction ::: Healthcare Information Technology and the Interoperability Problem
Since the early 1970s, healthcare information technology has moved toward comprehensive electronic medical records (EMR) in which almost every aspect of the patient's healthcare has been digitized and retained indefinitely BIBREF0, which has vastly improved the efficiency with which patient information can be retained, communicated, and analyzed. At the same time, the healthcare industry has moved from a fee-for-service model to a value-based model, facilitated in part by the existence of such a record and in part by public policy, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 BIBREF1, which provided financial incentives for the "meaningful use" of electronic medical records.
The realization of a holistic medical record has been slowed by various obstacles, chief among which is the problem of interoperability between systems. The problem of interoperability arises almost as soon as a healthcare organization begins to choose a vendor for their electronic medical record, when they are faced with a choice between an architecture based on a single monolithic system or a so-called best-of-breed approach involving multiple discrete systems, each chosen for its superior performance in a narrow domain. The monolith claims to handle all aspects of healthcare information management; the best-of-breed approach entails a multiplicity of systems, each of which may be superior in its domain but which are not smoothly integrated.
A major difference between the two architectures is how they solve the problem of interoperability. In the case of the monolith, the problem is solved by the system vendor, at least in principle, but at the cost to the customer of a loss of choice. In the best-of-breed approach, the problem of interoperability is shifted onto the customer, who incurs an often hefty cost in the form of a more complex systems architecture and the resulting need for specialized hardware, software, and staff to maintain it.
In a best-of-breed approach, the need for instantaneous intersystems communication is typically handled via an Enterprise Service Bus (ESB) BIBREF2, which ensures real-time message delivery to subscribing systems. Additionally, if the data is to be analyzed in combination, rather than in isolation within the silo of a single system, it must be recombined and stored outside of these systems. This is typically done in an Enterprise Data Warehouse (EDW) BIBREF3 and requires further specialized hardware, software, and staff. However, most EDWs are based on a batch-loading system that runs during off-peak hours for the previous calendar day's business BIBREF3; thus, while an EDW can be a powerful tool for retrospective analysis, it is unsuitable for real-time applications.
Interoperability is a serious challenge that modern healthcare systems must address in order to adequately serve their patients. In this paper we demonstrate a hitherto underused approach that combines the attractive aspects of both an enterprise service bus and an enterprise data warehouse to arrive at real-time analytics.
Background ::: Health Level Seven Version 2 (HL7v2)
HL7v2 is a healthcare messaging standard developed by the standards organization Health Level Seven International. It first emerged in 1988 and today is the most widely used such standard, having been adopted by over ninety-five percent of health systems in the United States and thirty-five countries worldwide BIBREF4. As such, it is something of a universal medium in the field of healthcare interoperability, yet it is terse and, without specialized training and access to the standard reference, cryptic.
Each HL7 message describes an event in a healthcare workflow and breaks down hierarchically into segments, fields, components, subcomponents, repeated components, and so on. There are well over one hundred types of messages and several times as many types of segments in HL7v2. The current version of the specification, for HL7 v2.8, is well over 2,500 pages long and contains nearly one million words BIBREF0. Partly as a consequence of this complexity, health interoperability has become a specialized field, replete with certifications and training and entire careers built on knowledge of HL7v2. An example HL7 message describing the following information is shown in Figure FIGREF4.
The PID (Patient Identification) segment contains the demographic information of the patient. Eve E. Everywoman was born on 1962-03-20 and lives in Statesville OH. Her patient ID number (presumably assigned to her by the Good Health Hospital) is 555-44-4444.
The OBR (Observation Request) segment identifies the observation as it was originally ordered: 15545 GLUCOSE. The observation was ordered by Patricia Primary MD and performed by Howard Hippocrates MD.
The OBX (Observation) segment contains the results of the observation: 182 mg/dl.
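Figure FIGREF4 is not reproduced here, but an HL7v2 result (ORU) message carrying the information described above looks approximately like the sketch below. Field positions are abbreviated, and values not given in the narrative (sending facility, control numbers, timestamps) are assumptions.

```
MSH|^~\&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01|CNTRL-3456|P|2.4
PID|||555-44-4444||EVERYWOMAN^EVE^E||19620320|F|||153 FERNWOOD DR^^STATESVILLE^OH
OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730||||||||555-55-5555^PRIMARY^PATRICIA^^^^MD
OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105|H|||F
```

The terseness is evident: without the segment and field definitions from the standard, little of this is self-explanatory.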
Background ::: Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR)
FHIR BIBREF5 is a new open standard for healthcare data developed by the same standards organization that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve.
In contrast to HL7v2, which is based on events in a healthcare workflow such as admit, discharge, and transfer, FHIR is built on the notion of conceptual entities from the healthcare domain, such as Patient, Encounter, and Observation, i.e. resources. Currently, FHIR encompasses 143 resources, each of which is described abstractly in the FHIR standard with the attributes Name, Flags, Cardinality, Type, and Description & Constraints BIBREF7. In a concrete implementation of FHIR, resources are serialized to one of the data exchange formats listed above. An example of an FHIR XML message is shown in Figure FIGREF5.
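Figure FIGREF5 is not included here; a minimal FHIR Patient resource serialized to XML has roughly the following form (the id and demographic values are illustrative):

```xml
<Patient xmlns="http://hl7.org/fhir">
  <id value="555-44-4444"/>
  <name>
    <family value="Everywoman"/>
    <given value="Eve"/>
  </name>
  <gender value="female"/>
  <birthDate value="1962-03-20"/>
</Patient>
```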
Background ::: Semantic Web
The term Semantic Web BIBREF8 denotes an interconnected machine-readable network of information. In some ways it is analogous to the World-Wide Web, but with some crucial differences. The most important similarity is in the vision for the two technologies: Like the World-Wide Web, the Semantic Web was envisioned as a way for users from different institutions, countries, disciplines, etc. to exchange information openly and in doing so to add to the sum of human knowledge. The difference, however, is in the different emphases put on human readability versus machine readability: Whereas the World-Wide Web was intended to be visually rendered by one of any number of web browsers before being read by humans and therefore prioritizes fault tolerance and general compatibility over precision, the semantic web prioritizes precision and logical rigor in order for the information contained in it to be machine readable and used for logical inference.
The similarities continue in the technologies used to implement the two webs. Information in both the Semantic Web and the World-Wide Web is intended to be accessed using the familiar data exchange protocol Hypertext Transfer Protocol (HTTP) and addressed using Uniform Resource Identifiers (URI) for the Semantic Web and Uniform Resource Locators (URL) for the World-Wide Web that tell the agent/browser how to find linked information. Even the data exchange formats are remarkably similar: The World-Wide Web uses Hypertext Markup Language (HTML) BIBREF9, a tree-structured subset of Standard Generalized Markup Language (SGML) BIBREF10, whereas the Semantic Web uses a variety of tree-structured formats such as XML, JSON, Terse RDF Triple Language (i.e. Turtle/TTL) BIBREF11, etc.
The most significant difference between the World-Wide Web and the Semantic Web is in the type of information that they encode. The Semantic Web delivers a payload of simple logical statements known as triples, each consisting of a subject, predicate, and object, whereas the World-Wide Web delivers a series of directives to the web browser that govern the layout of the rendered page as well as the content of the page, in the form of text, images, videos, scripts, and so on. This difference in payloads corresponds to their different purposes – the payload is delivered in the first case to an intelligent agent and in the second case to a web browser.
In more technical terms, the semantic web can be thought of as a distributed directed graph whose vertices are resources and whose edges are statements describing those resources. In its openness and decentralized nature, it bears some resemblance to the World Wide Web; however, whereas the World Wide Web consists of ad hoc, unsynchronized data presented in a variety of formats, the semantic web is a machine-readable body of information that can be synchronized while still coming from a variety of sources.
Background ::: Resource Description Framework (RDF)
RDF is the backbone of the semantic web BIBREF8. It is described as a framework, rather than a protocol or a standard, because it is an abstract model of information whose stated goal is "to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines (a priori) the semantics of any application domain." BIBREF12 Its concrete realization is typically a serialization into one of several formats including XML, JSON, TTL, etc.
The basic unit of information in RDF is a statement expressed as a logical triple; that is, a statement of the form <subject> <predicate> <object>, in which the predicate expresses a relationship between the subject and the object: for instance, bloodPressure :value 120. The subject must be a resource, that is, an object consisting of one or more statements, and the object may be either a literal, that is, a simple numeric or textual value, or another resource. The predicate describes some aspect or property of the subject. Because both the subject and the object can be resources, the object may also be described by statements in which it is the subject, leading to a complex graph structure.
A group of statements can be used to perform inference on their resources, thus creating new statements and enriching the semantic universe of the data set. For instance, the canonical syllogism "Socrates is a man; all men are mortal; therefore, Socrates is mortal" can be reproduced in the two statements Socrates :isA man and man :is mortal, resulting in a synthesized third statement: Socrates :is mortal. RDF supports "inference, shared semantics across multiple standards and data formats, data integration, semantic data validation, compliance enforcement, SPARQL [SPARQL Protocol and RDF Query Language (SPARQL)] queries and other uses." BIBREF13.
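In Turtle, the blood-pressure example and the syllogism above can be written out as triples like these; the prefix and property names are chosen for illustration rather than taken from any particular vocabulary.

```turtle
@prefix ex: <http://example.org/> .

ex:obs1       ex:code    "blood-pressure" ;
              ex:value   120 ;
              ex:subject ex:patient42 .

ex:Socrates   ex:isA     ex:Man .
ex:Man        ex:is      ex:Mortal .
# a reasoner can now derive the third statement: ex:Socrates ex:is ex:Mortal .
```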
Background ::: FHIR/RDF
One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between "patient x has viral pneumonia" (statement of fact) and "Dr. Jones diagnosed patient x with viral pneumonia" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is "a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood." The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems.
Background ::: SPARQL Protocol and RDF Query Language (SPARQL)
RDF has an associated query language that can be used to search for matching statements, known as SPARQL. Although syntactically and semantically based on Structured Query Language (SQL), the information model over which it searches is RDF's directed graph of resources and statements, not the familiar relations stored in a relational database.
The syntax is beyond the scope of this paper, but in general SPARQL queries outline the shape of the graph they wish to find. For an example SPARQL query that searches for blood pressure readings over 120 mmHg, see Figure FIGREF6.
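Figure FIGREF6 is not reproduced here, but a query in that spirit, with the FHIR/RDF property paths simplified for readability, looks something like this:

```sparql
PREFIX fhir: <http://hl7.org/fhir/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT ?patient ?systolic
WHERE {
  ?obs  a            fhir:Observation ;
        fhir:subject ?patient ;
        fhir:value   ?systolic .
  FILTER ( xsd:decimal(?systolic) > 120 )
}
```

The query describes the triangle of observation, subject, and value it wants to match, and the FILTER clause applies the numeric threshold.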
Method
At a high level, the semantic enrichment engine is designed to take healthcare data in a variety of formats as input and store it in a triplestore database that users can query. In this way, the engine acts as both a collector, receiving messages from numerous sources, and a bus for delivering messages to multiple sources, as well as a real-time analytics platform. For example, a message from a vital signs monitor and from a registration system can be coalesced into a new stream containing blood pressure, temperature, and laboratory values for use in a machine learning model to predict sepsis.
To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication. This modular design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors' experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.
A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7.
Method ::: Class Hierarchy
The project was written in Java, with each major component in its own package. There is a top-level class named ActiveMQEnabled that handles common tasks, such as connecting to the message broker, logging, event handling, and other such functionality. Each type of component in the pipeline - input, encoder, store, query, output, and application - is a subclass of ActiveMQEnabled as well as a superclass to specific types of those components. Most components are able both to send and receive messages, with certain exceptions: for example, inputs can only send and outputs can only receive. Stores can both receive and send, but in the concrete implementation in this project, the TDB store only receives (queries are better handled as timed polls, rather than being event-driven).
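A skeletal version of that hierarchy might look as follows. Only the ActiveMQEnabled class name and the queue-naming convention come from the text; the broker wiring is elided and the remaining names are assumptions.

```java
// Skeleton of the component hierarchy; broker connection details are elided.
public abstract class ActiveMQEnabled {
    protected final String inputQueue;   // queue this component listens on (may be null)
    protected final String outputQueue;  // queue this component publishes to (may be null)

    protected ActiveMQEnabled(String inputQueue, String outputQueue) {
        this.inputQueue = inputQueue;
        this.outputQueue = outputQueue;
        // connect to the ActiveMQ broker, set up logging and listeners here
    }

    protected void send(String message)      { /* publish to outputQueue */ }
    protected void onMessage(String message) { /* overridden by receiving components */ }
}

// Inputs only send: no input queue.
class ObservationHl7Input extends ActiveMQEnabled {
    ObservationHl7Input() { super(null, "INPUT.OBSERVATION.HL7V2"); }
}

// Outputs only receive: no output queue.
class ConsoleOutput extends ActiveMQEnabled {
    ConsoleOutput() { super("QUERY.RESULTS", null); }
    @Override protected void onMessage(String message) { System.out.println(message); }
}
```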
Method ::: Inputs
In the first stage of the module, simulated inputs represent a variety of healthcare entities and arrive in a variety of formats: patients in a pipe-delimited list, encounters as FHIR messages, and observations as HL7v2 messages. As discussed in the Background section, all of these are widely used input formats in modern health systems and realistically represent the heterogeneous message exchanges that are likely to occur in a real healthcare setting. Each input is configurable with regard to message timing and frequency, and the vital signs can be made to simulate various conditions such as hypertension or hypothermia. An example of a generated vital sign is shown in Figure FIGREF8.
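Figure FIGREF8 is not shown; a generated reading might be produced by something like the following sketch. The class name, the LOINC code, and the single-segment message shape are illustrative assumptions rather than the project's actual generator.

```java
// Hypothetical generator for one simulated blood-pressure reading as an HL7v2 OBX segment.
import java.util.Random;

public class VitalSignSimulator {
    private static final Random RNG = new Random();

    public static String nextBloodPressure(boolean simulateHypertension) {
        int systolic  = simulateHypertension ? 150 + RNG.nextInt(40) : 100 + RNG.nextInt(30);
        int diastolic = simulateHypertension ?  95 + RNG.nextInt(20) :  65 + RNG.nextInt(20);
        // A full message would also carry MSH, PID, and OBR segments.
        return String.format("OBX|1|NM|55284-4^BLOOD PRESSURE||%d/%d|mm[Hg]|||||F", systolic, diastolic);
    }
}
```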
| One of the several formats into which FHIR can be serialized is RDF, there is the potential for a slight mismatch between the models
d3bb06d730efbedd30ec226fe8cf828a4773bf5c | d3bb06d730efbedd30ec226fe8cf828a4773bf5c_0 | Q: What do FHIR and RDF stand for?
| Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR), Resource Description Framework (RDF)
2255c36c8c7ed6084da577b480eb01d349f52943 | 2255c36c8c7ed6084da577b480eb01d349f52943_0 | Q: What is the motivation behind the work? Why question generation is an important task?
Text: Introduction
Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.
Background and Related Work
A relatively simple method for question generation is the fill-in-the-blank approach, which is also known as cloze tasks. Such a method typically involves the sentence first being tokenized and tagged for part-of-speech with the named entity or noun part of the sentence masked out. These generated questions are an exact match to the one in the reading passage except for the missing word or phrase. Although fill-in-the-blank questions are often used for reading comprehension, answering such questions correctly may not necessarily indicate comprehension if it is too easy to match the question to the relevant sentence in the passage. To improve fill in the blank type questions, a prior study used a supervised machine learning model to generate fill-in-the-blank type questions. The model paraphrases the sentence from the passage with the missing word by anonymizing entity markers BIBREF0.
Semi-automatic methods can also be use for question generation. Semi-automatic question generation involves human-generated templates in combination with querying the linked database repositories to complete the question BIBREF3, BIBREF4. The answer to the question is also extracted from the linked database. If the question is to be answered selecting from multiple choices, then distractors could also be selected from the database and randomly generated as incorrect choices for the answer. Another example of template-based question-and-answer generator using linked data is called Sherlock that has been shown to generate questions with varying levels of difficulty BIBREF5. However, designing a large set of high quality questions using semi-automatic question generation methods can be cognitively demanding and time-consuming. The types of questions created are also constrained to the templates. Generating a large dataset of questions is therefore cumbersome.
Other automatic question generators require human-made rules for the model to follow BIBREF1, BIBREF2. Educators are recruited to define the rules that will convert declarative sentences into interrogative questions BIBREF10, BIBREF11, BIBREF12. The rules generated requires the educator to possess both linguistic knowledge and subject knowledge. As with the template-based methods described above, this rules-based method can also be time-consuming and cognitively demanding. Moreover, the quality of the questions is limited by the quality of the handcrafted rules, and rules-based approaches are not scalable beyond human capacity.
Perhaps the most automated method reported thus far utilizes RNNs as sequence transduction (seq2seq) models to generate questions from sentences or passages BIBREF6, BIBREF7. In the most successful variant of RNNs, the Long Short-Term Memory (LSTM) networks, the model reads from left to right and includes an encoder and a decoder BIBREF13. The encoder takes the input and converts it to hidden vectors, while the decoder takes the vectors from the encoder and creates its own hidden vector to predict the next word based on the previous hidden vector BIBREF13. The hidden vector to the decoder stores all of the information about the context. The components in between the encoder and the decoder of the seq2seq model consists of attention, beam search, and bucketing. The attention mechanism takes the input to the decoder and allows the decoder to analyze the input sequence selectively. Beam search mechanism allows the decoder to select the highest probability word occurrence based on previous words. The bucketing mechanism allows the length of sequences to vary based on what we designate the bucket size to be. The decoder is then rewarded for correctly predicting the next word and penalized for incorrect predictions.
In this study, we developed a seq2seq model to automatically generate questions from Wikipedia passages. Our goal is to produce plausible questions with minimal human intervention that can assist educators in developing their quizzes and tests. Our model is based on transformers instead of RNNs. Transformers can train faster than RNNs because it is more parallelizable, working well with large and limited datasets BIBREF8, BIBREF9.
Transformers can also achieve better performance at a fraction of the training cost. Like the RNN approach, transformers have an encoder and a decoder. Transformers also incorporate the beam search and bucketing mechanisms. Unlike RNNs, transformers adopt multiple attention heads without requiring any recurrence, though recurrence can be added. The self-attention mechanism used is the scaled dot-product attention according to
where $d$ is the dimension (number of columns) of the input queries $Q$, keys $K$, and values $V$. By using self-attention, transformers can account for the whole sequence in its entirety and bidirectionally. For multi-head attention with $h$ heads that jointly attend to different representation subspaces at different positions given a sequence of length $m$ and the matrix $H\in \mathbf {R}^{m \times d}$, the result is
where the projections are learned parameter matrices $H^W_i,H^K_i,H^V_i\in \mathbf {R}^{(d \times d)/h}$ and $W^O\in \mathbf {R}^{(d \times d)}$.
Models utilizing transformers have achieved state-of-the-art performance on many NLP tasks, including question answering BIBREF14, BIBREF15. It is therefore interesting to study how transformers might be used to generate questions by training on the inverted SQuAD.
Experimental Methods ::: Data.
In this study, we used the Stanford Question Answering Dataset (SQuAD). SQuAD is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage BIBREF16. To generate the data for SQuAD, the top 10,000 English Wikipedia articles were ranked by Project Nayuki's Wikipedia's internal PageRanks as high-quality. Paragraphs that are longer than 500 characters were then extracted from the articles and partitioned into a training set (80%), a development set (10%), and a test set (10%). Only the former two datasets are publicly available. Crowdworkers were then employed to generate the questions and then the answers to the questions based on the extracted paragraphs. Another subset of crowdworkers were then asked to answer the questions that were generated given the corresponding passage to compare the model's answer with human generated answers and provide a benchmark for machine learning models.
Experimental Methods ::: Pre-processing.
We used the publicly available data from SQuAD to train our model to generate the questions. We used SQuAD's training and dev sets as our training and test sets, respectively. The reading passage, question, and answer data were pre-processed as described in the next section. For the test set, we provided the model with the pre-processed reading passages and answers that were never seen by the model. We inverted SQuAD by training a machine learning model to infer a question given a reading passage and an answer separated by a special token (i.e., `*') as input.
For pre-processing the reading passages, questions and answers, spaCy was used for named entity recognition and part-of-speech tagging BIBREF17, and WordPiece was used for tokenization BIBREF18. To ensure intelligible outputs, stop words are removed from the context passages and answers but not the questions. After lowercasing, tokenizing and removing the stop words, the named entities are then replaced with their respective tags to better allow the model to generalize and learn patterns in the data. We address a variety of named and numeric entities, including companies, locations, organizations and products, etc. (Table TABREF5). To account for multiple occurrences of a named entity type in the context passage, we also included an index after the named entity tag. As an example, consider the following context passage:
Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
Applying the pre-processing described above, including the indexed-named entity tag replacement but not yet removing stop words, would produce
EVENT 0 DATE 0 was an NORP 0 football game to determine the champion of ORG 0 ( ORG 1 ) for DATE 1 . the NORP 0 football conference ( ORG 2 ) champion ORG 3 defeated ORG 4 ( ORG 5 ) champion ORG 6 24 – 10 to earn their ORDINAL 0 EVENT 0 title . the game was played on DATE 2 , at FAC 0 in FAC 1 at GPE 0 , GPE 1 . as this was the ORDINAL 1 EVENT 0 , the league emphasized the " golden anniversary " with various gold - themed initiatives , as well as temporarily suspend ##ing the tradition of naming each EVENT 0 game with LANGUAGE 0 nu ##meral ##s ( under which the game would have been known as " EVENT 0 l " ) , so that the logo could prominently feature the LANGUAGE 1 nu ##meral ##s DATE 0 .
Note that the index is separated from the named entity tag by a space and therefore interpreted by the model as a separate token so that named entities of similar types can be associated and generalized from without sacrificing the ability to distinguish between different entities of the same type. This spacing is necessary since we do not employ character-level embedding.
To generate the sub-word embeddings, we used the pre-trained WordPiece model from BERT, which has a 30,000 token vocabulary. WordPiece is a statistical technique used to segment text into tokens at the sub-word level. The vocabulary is initialized with all individual characters and iteratively aggregates the most frequently and likely combination of symbols into a vocabulary. The generation of WordPieces allows the model to capture the meaning of commonly occurring word segments, such as the root word suspend from suspending in the example context passage above. This dispenses with the need for the model to learn different variants or conjugations of a word.
Each input to the model comprised of a concatenation of the pre-processed answer and context passage. The most commonly agreed upon answer was chosen from among the three plausible answers for each question in SQuAD. Unlike prior question generation studies using SQuAD by isolating the sentence containing the answer, here we include the entire passage because the answers can depend on the context outside of the answer-containing sentence. Compared to RNNs used in prior studies, transformers allow us to more conveniently train and perform inference on longer sequence lengths. The model was developed with TensorFlow BIBREF19 and Tensor2Tensor BIBREF20, and then trained with an Nvidia Tesla T4 GPU for 1 million training steps.
To evaluate and analyze the results, the generated questions are post-processed by removing unnecessary spaces and consolidating the resulting WordPieces into a single coherent word. In other words, the results were de-tokenized using BERT's pre-trained WordPiece model.
Results and Discussion
To measure the model's question formulating ability, we calculated the word error rate (WER) between the generated questions and the corresponding questions from SQuAD. The SQuAD questions were used as the reference questions, as they ideally ask for the answers provided. WER is also known as the edit distance, which is a measure of similarity at the word level between generated questions and the target questions from SQuAD.
In essence, WER is the Levenshtein distance applied to a sequence of words. Differences between the two sequences can include sequence insertions, deletions and substitutions. WER can be calculated according to
where $S$ is the number of word substitutions, $D$ is the number of word deletions, $I$ is the number of word insertions, $C$ is the number of correct words, and $N$ is the total number of words in the reference sequence (i.e., $N = S + D + C$).
A low WER would indicate that a model-generated question is similar to the reference question, while a high WER means that the generated question differs significantly from the reference question. It should be noted that a high WER does not necessarily invalidate the generated question, as different questions can have the same answers, and there could be various ways of phrasing the same question. On the other hand, a situation with low WER of 1 could be due to the question missing a pertinent word to convey the correct idea. Despite these shortcomings in using WER as a metric of success, the WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD based on the given reading passage and answer. WER can be used for initial analyses that can lead to deeper insights as discussed further below.
Using the SQuAD dev set as our test set, our model generated 10,570 questions. The questions generated were mostly grammatically correct and related to the topic of the context passage. Fig. FIGREF7 shows the WER distribution, which has a mean of 9.66 words. 0.05% of the total questions generated by the model were an exact match to the corresponding SQuAD questions. For our discussion and analysis, we examine generated questions with various different WERs to gain insight into the quality of the questions. 9.94% of the model-generated questions have a WER less than or equal to 5, 56.38% have a WER between 6 and 10, 26.41% with a WER between 11 and 15, 5.81% with a WER between 16 and 20, and 1.45% of the generated questions have a WER greater than 21 (Fig. FIGREF7). In addition to the WER, the number of words in the questions (Figs. FIGREF10 and FIGREF11) and the first word of the questions (Figs. FIGREF8 and FIGREF9) are also considered below.
As shown in Fig. FIGREF7, our model was able to generate the exact questions as those corresponding to SQuAD with a WER of 0 for a small portion of instances. These types of questions tend to be relatively simpler and shorter, which are easier to learn as apparent in select examples from Table TABREF5. The model was also able to learn about and utilize synonyms, as apparent in the following examples where the WER is 1. Instead of using based as in the target question from SQuAD, “where is ORG 0 based?", the model used located to generate “where is ORG 0 located?". Although the term area can encompass many meanings, city and area can be synonymous depending on the context and therefore “what is the largest area of GPE 1?" generated by the model has the same meaning as “what is the largest city of GPE 1?" from SQuAD. The ability to capture relative meaning between words indicates that applying a pre-trained contextualized language model can improve performance in future studies.
Beyond exact matches, a low WER does not guarantee that a model-generated question has the same meaning as the target SQuAD question. Consider the following examples where the WERs are all 1, but the meanings differ between the generated and target questions. Sometimes the model produced a question in past tense when the target question from SQuAD is in present tense, e.g. “who was the president of GPE 3?" generated by the model versus “who is the president of GPE 3?" from SQuAD. In some cases, the named entity type matched, but the index did not, e.g. “where was ORG 2 located?" generated by the model versus “where was ORG 0 located?" from SQuAD. This case of swapping the index of a given named entity type could be a consequence of the pre-processing employed. Despite the different meanings, the questions generated are plausible questions that could be reasonably asked given the same context passages.
Since the questions generated by our model on the SQuAD dev set have an average WER of 9.66, we examined the questions with a WER of 8 to 10 (Table TABREF12) to see whether the questions are structured properly and whether they maintain the same meaning as the target SQuAD questions. As shown in Table TABREF12, it was challenging for the model to produce questions as complex as those from SQuAD, which resulted in a large WER. The average word count for the generated questions is 8 words per question (Fig. FIGREF10), while most of the questions from SQuAD are more detailed and complex with a total average word count of 12 (Fig. FIGREF11). The structure of the questions generated by the model are simpler and less detailed than those from SQuAD, but most are nevertheless plausible questions that are grammatically correct. The model-generated questions are relevant to the topic of the context passage and has the correct type of asking words to start the question. For example, although the target question from SQuAD is “by what means were scientists able to liquefy air?", the model can generate “what is the name of the process of water?". The questions have a WER of 9 and both can be interpreted as referring to the transformation process of water; more specifically, the condensation process. The model may have not sufficiently learned about the term liquefy but was still able to ask a similar question given limitations in the dataset used for training.
We next examined questions in the high WER regime, i.e. WER of 20 or more. Questions generated by the model still reflect the answer and context passage of interest. For example, given inputs for the target SQuAD question “who was responsible for the authorship of a paper published on real time - computations?", the model generated “what did PERSON 3 write?". Although both questions ask about authorship, the model's question is asking for a different type of answer, as indicated by the asking word of the questions (i.e., who vs. what).
To understand how the model chooses the asking word, we plot the first-word frequencies in descending order for SQuAD and model-generated questions (Figs. FIGREF8 and FIGREF9). Questions from SQuAD predominantly involve what, which reflects the first-word distribution of the training set as well. As the first-word is usually the asking word, training data imbalance most likely caused the model to be biased towards generating what questions. While the questions from SQuAD involve over 21 different words that initiate the questions (Figs. FIGREF9), our model only uses 10 different words to initiate questions (Figs. FIGREF8). The lack of diversity demonstrates that our model is not as well versed in asking questions as the crowdworkers in SQuAD and forms less elaborate types of questions. After all, our model generates an average of 8 words per question (Fig. FIGREF10), whereas the SQuAD questions have an average of 12 words per question (Fig. FIGREF11).
Conclusion and Future Work
We demonstrate that a transformer model can be trained to generate questions with correct grammar and relevancy to the context passage and answers provided. WER analyses was applied to diagnose shortcomings and guide future improvements. We observed that a low WER could be due to syntactic similarity but semantic disagreement, while two questions with syntactic divergence but similar meaning could result in a high WER. Since our results does not exhibit issues relating to contextual and syntactic roles of words within a generated question, other popular metrics (BLEU, ROUGE, F1-score, etc.) would lead to similar findings BIBREF21. Perhaps a better approach to evaluating question generation models is to apply state-of-the-art question answering models from SQuAD's leaderboard to measure how many answers agree.
To improve the model, more and balanced data can be provided to train the model to reduce the asking word bias. One method that can be used to obtain more data is through data augmentation by back-translation BIBREF22. The original SQuAD can be translated into another language such as French. The translated text could then be translated back into English to generate a variation of the context, question and answers that provide more training data for the model. Another data augmentation method is to paraphrase SQuAD's data to get another variation of the text BIBREF23, but one would have to ensure that pertinent information is not sacrificed in the summarization. The augmented data should then be sampled to reduce bias as much as possible. Other pre-processing variations can be considered. We tried including stopwords and removing the answer form the context passages but did not see improvements. Recent advancements in pre-trained bidirectionally contextualized language models can also be incorporated BIBREF14, BIBREF15, which would require a decoder to be added to the pre-trained model. | Such a system would benefit educators by saving time to generate quizzes and tests. |
9e391c8325b48f6119ca4f3d428b1b2b037f5c13 | 9e391c8325b48f6119ca4f3d428b1b2b037f5c13_0 | Q: Why did they choose WER as evaluation metric?
Text: Introduction
Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.
Background and Related Work
A relatively simple method for question generation is the fill-in-the-blank approach, which is also known as cloze tasks. Such a method typically involves the sentence first being tokenized and tagged for part-of-speech with the named entity or noun part of the sentence masked out. These generated questions are an exact match to the one in the reading passage except for the missing word or phrase. Although fill-in-the-blank questions are often used for reading comprehension, answering such questions correctly may not necessarily indicate comprehension if it is too easy to match the question to the relevant sentence in the passage. To improve fill in the blank type questions, a prior study used a supervised machine learning model to generate fill-in-the-blank type questions. The model paraphrases the sentence from the passage with the missing word by anonymizing entity markers BIBREF0.
Semi-automatic methods can also be use for question generation. Semi-automatic question generation involves human-generated templates in combination with querying the linked database repositories to complete the question BIBREF3, BIBREF4. The answer to the question is also extracted from the linked database. If the question is to be answered selecting from multiple choices, then distractors could also be selected from the database and randomly generated as incorrect choices for the answer. Another example of template-based question-and-answer generator using linked data is called Sherlock that has been shown to generate questions with varying levels of difficulty BIBREF5. However, designing a large set of high quality questions using semi-automatic question generation methods can be cognitively demanding and time-consuming. The types of questions created are also constrained to the templates. Generating a large dataset of questions is therefore cumbersome.
Other automatic question generators require human-made rules for the model to follow BIBREF1, BIBREF2. Educators are recruited to define the rules that will convert declarative sentences into interrogative questions BIBREF10, BIBREF11, BIBREF12. The rules generated requires the educator to possess both linguistic knowledge and subject knowledge. As with the template-based methods described above, this rules-based method can also be time-consuming and cognitively demanding. Moreover, the quality of the questions is limited by the quality of the handcrafted rules, and rules-based approaches are not scalable beyond human capacity.
Perhaps the most automated method reported thus far utilizes RNNs as sequence transduction (seq2seq) models to generate questions from sentences or passages BIBREF6, BIBREF7. In the most successful variant of RNNs, the Long Short-Term Memory (LSTM) networks, the model reads from left to right and includes an encoder and a decoder BIBREF13. The encoder takes the input and converts it to hidden vectors, while the decoder takes the vectors from the encoder and creates its own hidden vector to predict the next word based on the previous hidden vector BIBREF13. The hidden vector to the decoder stores all of the information about the context. The components in between the encoder and the decoder of the seq2seq model consists of attention, beam search, and bucketing. The attention mechanism takes the input to the decoder and allows the decoder to analyze the input sequence selectively. Beam search mechanism allows the decoder to select the highest probability word occurrence based on previous words. The bucketing mechanism allows the length of sequences to vary based on what we designate the bucket size to be. The decoder is then rewarded for correctly predicting the next word and penalized for incorrect predictions.
In this study, we developed a seq2seq model to automatically generate questions from Wikipedia passages. Our goal is to produce plausible questions with minimal human intervention that can assist educators in developing their quizzes and tests. Our model is based on transformers instead of RNNs. Transformers can train faster than RNNs because it is more parallelizable, working well with large and limited datasets BIBREF8, BIBREF9.
Transformers can also achieve better performance at a fraction of the training cost. Like the RNN approach, transformers have an encoder and a decoder. Transformers also incorporate the beam search and bucketing mechanisms. Unlike RNNs, transformers adopt multiple attention heads without requiring any recurrence, though recurrence can be added. The self-attention mechanism used is the scaled dot-product attention according to
where $d$ is the dimension (number of columns) of the input queries $Q$, keys $K$, and values $V$. By using self-attention, transformers can account for the whole sequence in its entirety and bidirectionally. For multi-head attention with $h$ heads that jointly attend to different representation subspaces at different positions given a sequence of length $m$ and the matrix $H\in \mathbf {R}^{m \times d}$, the result is
where the projections are learned parameter matrices $H^W_i,H^K_i,H^V_i\in \mathbf {R}^{(d \times d)/h}$ and $W^O\in \mathbf {R}^{(d \times d)}$.
Models utilizing transformers have achieved state-of-the-art performance on many NLP tasks, including question answering BIBREF14, BIBREF15. It is therefore interesting to study how transformers might be used to generate questions by training on the inverted SQuAD.
Experimental Methods ::: Data.
In this study, we used the Stanford Question Answering Dataset (SQuAD). SQuAD is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage BIBREF16. To generate the data for SQuAD, the top 10,000 English Wikipedia articles were ranked by Project Nayuki's Wikipedia's internal PageRanks as high-quality. Paragraphs that are longer than 500 characters were then extracted from the articles and partitioned into a training set (80%), a development set (10%), and a test set (10%). Only the former two datasets are publicly available. Crowdworkers were then employed to generate the questions and then the answers to the questions based on the extracted paragraphs. Another subset of crowdworkers were then asked to answer the questions that were generated given the corresponding passage to compare the model's answer with human generated answers and provide a benchmark for machine learning models.
Experimental Methods ::: Pre-processing.
We used the publicly available data from SQuAD to train our model to generate the questions. We used SQuAD's training and dev sets as our training and test sets, respectively. The reading passage, question, and answer data were pre-processed as described in the next section. For the test set, we provided the model with the pre-processed reading passages and answers that were never seen by the model. We inverted SQuAD by training a machine learning model to infer a question given a reading passage and an answer separated by a special token (i.e., `*') as input.
For pre-processing the reading passages, questions and answers, spaCy was used for named entity recognition and part-of-speech tagging BIBREF17, and WordPiece was used for tokenization BIBREF18. To ensure intelligible outputs, stop words are removed from the context passages and answers but not the questions. After lowercasing, tokenizing and removing the stop words, the named entities are then replaced with their respective tags to better allow the model to generalize and learn patterns in the data. We address a variety of named and numeric entities, including companies, locations, organizations and products, etc. (Table TABREF5). To account for multiple occurrences of a named entity type in the context passage, we also included an index after the named entity tag. As an example, consider the following context passage:
Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
Applying the pre-processing described above, including the indexed-named entity tag replacement but not yet removing stop words, would produce
EVENT 0 DATE 0 was an NORP 0 football game to determine the champion of ORG 0 ( ORG 1 ) for DATE 1 . the NORP 0 football conference ( ORG 2 ) champion ORG 3 defeated ORG 4 ( ORG 5 ) champion ORG 6 24 – 10 to earn their ORDINAL 0 EVENT 0 title . the game was played on DATE 2 , at FAC 0 in FAC 1 at GPE 0 , GPE 1 . as this was the ORDINAL 1 EVENT 0 , the league emphasized the " golden anniversary " with various gold - themed initiatives , as well as temporarily suspend ##ing the tradition of naming each EVENT 0 game with LANGUAGE 0 nu ##meral ##s ( under which the game would have been known as " EVENT 0 l " ) , so that the logo could prominently feature the LANGUAGE 1 nu ##meral ##s DATE 0 .
Note that the index is separated from the named entity tag by a space and therefore interpreted by the model as a separate token so that named entities of similar types can be associated and generalized from without sacrificing the ability to distinguish between different entities of the same type. This spacing is necessary since we do not employ character-level embedding.
To generate the sub-word embeddings, we used the pre-trained WordPiece model from BERT, which has a 30,000 token vocabulary. WordPiece is a statistical technique used to segment text into tokens at the sub-word level. The vocabulary is initialized with all individual characters and iteratively aggregates the most frequently and likely combination of symbols into a vocabulary. The generation of WordPieces allows the model to capture the meaning of commonly occurring word segments, such as the root word suspend from suspending in the example context passage above. This dispenses with the need for the model to learn different variants or conjugations of a word.
Each input to the model comprised of a concatenation of the pre-processed answer and context passage. The most commonly agreed upon answer was chosen from among the three plausible answers for each question in SQuAD. Unlike prior question generation studies using SQuAD by isolating the sentence containing the answer, here we include the entire passage because the answers can depend on the context outside of the answer-containing sentence. Compared to RNNs used in prior studies, transformers allow us to more conveniently train and perform inference on longer sequence lengths. The model was developed with TensorFlow BIBREF19 and Tensor2Tensor BIBREF20, and then trained with an Nvidia Tesla T4 GPU for 1 million training steps.
To evaluate and analyze the results, the generated questions are post-processed by removing unnecessary spaces and consolidating the resulting WordPieces into a single coherent word. In other words, the results were de-tokenized using BERT's pre-trained WordPiece model.
Results and Discussion
To measure the model's question formulating ability, we calculated the word error rate (WER) between the generated questions and the corresponding questions from SQuAD. The SQuAD questions were used as the reference questions, as they ideally ask for the answers provided. WER is also known as the edit distance, which is a measure of similarity at the word level between generated questions and the target questions from SQuAD.
In essence, WER is the Levenshtein distance applied to a sequence of words. Differences between the two sequences can include sequence insertions, deletions and substitutions. WER can be calculated according to
where $S$ is the number of word substitutions, $D$ is the number of word deletions, $I$ is the number of word insertions, $C$ is the number of correct words, and $N$ is the total number of words in the reference sequence (i.e., $N = S + D + C$).
A low WER would indicate that a model-generated question is similar to the reference question, while a high WER means that the generated question differs significantly from the reference question. It should be noted that a high WER does not necessarily invalidate the generated question, as different questions can have the same answers, and there could be various ways of phrasing the same question. On the other hand, a situation with low WER of 1 could be due to the question missing a pertinent word to convey the correct idea. Despite these shortcomings in using WER as a metric of success, the WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD based on the given reading passage and answer. WER can be used for initial analyses that can lead to deeper insights as discussed further below.
Using the SQuAD dev set as our test set, our model generated 10,570 questions. The questions generated were mostly grammatically correct and related to the topic of the context passage. Fig. FIGREF7 shows the WER distribution, which has a mean of 9.66 words. 0.05% of the total questions generated by the model were an exact match to the corresponding SQuAD questions. For our discussion and analysis, we examine generated questions with various different WERs to gain insight into the quality of the questions. 9.94% of the model-generated questions have a WER less than or equal to 5, 56.38% have a WER between 6 and 10, 26.41% with a WER between 11 and 15, 5.81% with a WER between 16 and 20, and 1.45% of the generated questions have a WER greater than 21 (Fig. FIGREF7). In addition to the WER, the number of words in the questions (Figs. FIGREF10 and FIGREF11) and the first word of the questions (Figs. FIGREF8 and FIGREF9) are also considered below.
As shown in Fig. FIGREF7, our model was able to generate the exact questions as those corresponding to SQuAD with a WER of 0 for a small portion of instances. These types of questions tend to be relatively simpler and shorter, which are easier to learn as apparent in select examples from Table TABREF5. The model was also able to learn about and utilize synonyms, as apparent in the following examples where the WER is 1. Instead of using based as in the target question from SQuAD, “where is ORG 0 based?", the model used located to generate “where is ORG 0 located?". Although the term area can encompass many meanings, city and area can be synonymous depending on the context and therefore “what is the largest area of GPE 1?" generated by the model has the same meaning as “what is the largest city of GPE 1?" from SQuAD. The ability to capture relative meaning between words indicates that applying a pre-trained contextualized language model can improve performance in future studies.
Beyond exact matches, a low WER does not guarantee that a model-generated question has the same meaning as the target SQuAD question. Consider the following examples where the WERs are all 1, but the meanings differ between the generated and target questions. Sometimes the model produced a question in past tense when the target question from SQuAD is in present tense, e.g. “who was the president of GPE 3?" generated by the model versus “who is the president of GPE 3?" from SQuAD. In some cases, the named entity type matched, but the index did not, e.g. “where was ORG 2 located?" generated by the model versus “where was ORG 0 located?" from SQuAD. This case of swapping the index of a given named entity type could be a consequence of the pre-processing employed. Despite the different meanings, the questions generated are plausible questions that could be reasonably asked given the same context passages.
Since the questions generated by our model on the SQuAD dev set have an average WER of 9.66, we examined the questions with a WER of 8 to 10 (Table TABREF12) to see whether the questions are structured properly and whether they maintain the same meaning as the target SQuAD questions. As shown in Table TABREF12, it was challenging for the model to produce questions as complex as those from SQuAD, which resulted in a large WER. The average word count for the generated questions is 8 words per question (Fig. FIGREF10), while most of the questions from SQuAD are more detailed and complex with a total average word count of 12 (Fig. FIGREF11). The structure of the questions generated by the model are simpler and less detailed than those from SQuAD, but most are nevertheless plausible questions that are grammatically correct. The model-generated questions are relevant to the topic of the context passage and has the correct type of asking words to start the question. For example, although the target question from SQuAD is “by what means were scientists able to liquefy air?", the model can generate “what is the name of the process of water?". The questions have a WER of 9 and both can be interpreted as referring to the transformation process of water; more specifically, the condensation process. The model may have not sufficiently learned about the term liquefy but was still able to ask a similar question given limitations in the dataset used for training.
We next examined questions in the high WER regime, i.e. WER of 20 or more. Questions generated by the model still reflect the answer and context passage of interest. For example, given inputs for the target SQuAD question “who was responsible for the authorship of a paper published on real time - computations?", the model generated “what did PERSON 3 write?". Although both questions ask about authorship, the model's question is asking for a different type of answer, as indicated by the asking word of the questions (i.e., who vs. what).
To understand how the model chooses the asking word, we plot the first-word frequencies in descending order for SQuAD and model-generated questions (Figs. FIGREF8 and FIGREF9). Questions from SQuAD predominantly involve what, which reflects the first-word distribution of the training set as well. As the first-word is usually the asking word, training data imbalance most likely caused the model to be biased towards generating what questions. While the questions from SQuAD involve over 21 different words that initiate the questions (Figs. FIGREF9), our model only uses 10 different words to initiate questions (Figs. FIGREF8). The lack of diversity demonstrates that our model is not as well versed in asking questions as the crowdworkers in SQuAD and forms less elaborate types of questions. After all, our model generates an average of 8 words per question (Fig. FIGREF10), whereas the SQuAD questions have an average of 12 words per question (Fig. FIGREF11).
Conclusion and Future Work
We demonstrate that a transformer model can be trained to generate questions with correct grammar and relevancy to the context passage and answers provided. WER analyses was applied to diagnose shortcomings and guide future improvements. We observed that a low WER could be due to syntactic similarity but semantic disagreement, while two questions with syntactic divergence but similar meaning could result in a high WER. Since our results does not exhibit issues relating to contextual and syntactic roles of words within a generated question, other popular metrics (BLEU, ROUGE, F1-score, etc.) would lead to similar findings BIBREF21. Perhaps a better approach to evaluating question generation models is to apply state-of-the-art question answering models from SQuAD's leaderboard to measure how many answers agree.
To improve the model, more and balanced data can be provided to train the model to reduce the asking word bias. One method that can be used to obtain more data is through data augmentation by back-translation BIBREF22. The original SQuAD can be translated into another language such as French. The translated text could then be translated back into English to generate a variation of the context, question and answers that provide more training data for the model. Another data augmentation method is to paraphrase SQuAD's data to get another variation of the text BIBREF23, but one would have to ensure that pertinent information is not sacrificed in the summarization. The augmented data should then be sampled to reduce bias as much as possible. Other pre-processing variations can be considered. We tried including stopwords and removing the answer form the context passages but did not see improvements. Recent advancements in pre-trained bidirectionally contextualized language models can also be incorporated BIBREF14, BIBREF15, which would require a decoder to be added to the pre-trained model. | WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD, WER can be used for initial analyses |
5bcc12680cf2eda2dd13ab763c42314a26f2d993 | 5bcc12680cf2eda2dd13ab763c42314a26f2d993_0 | Q: What evaluation metrics were used in the experiment?
Text: Introduction
Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.
However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask on video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be the ones asking about a how procedure as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task.
Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering.
The other issue is that most videos with sufficient amount of information, which are likely to be asked, have much longer lengths than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies.
In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1.
The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
Related Work
Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8, are factoid-based, meaning the answers typically involve a single entity. Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step, or even fragmented into multiple segments throughout the video.
Visual corpora in particular have proven extremely valuable to visual question-answering tasks BIBREF9, the most similar being MovieQA BIBREF1 and VideoQA BIBREF0. Similar to how our data is generated from video tutorials, the MovieQA and VideoQA corpora are generated from movie scripts and news transcripts, respectively. MovieQA's answers have a shorter span than the answers collected in our corpus, because questions and answer pairs were generated after each paragraph in a movie's plot synopsis BIBREF1. The MovieQA dataset also contains specific annotated answers with incorrect examples for each question. In the VideoQA dataset, questions focus on a single entity, contrary to our instructional video dataset. Although not necessarily a visual question-answering task, the work proposed by BIBREF10 involved answering questions over transcript data. Contrary to our work, Gupta's dataset is not publicly available and their examples only showcase factoid-style questions involving single entity answers.
BIBREF11 focus on aligning a set of instructions to a video of someone carrying out those instructions. In their task, they use the video transcript to represent the video, which they later augment with a visual cue detector on food entities. Their task focuses on procedure-based cooking videos and, contrary to ours, is primarily a text alignment task. In our task, we aim to answer questions on instructional-style videos using the transcripts, in which the answer can involve steps not mentioned in the question.
TutorialVQA Dataset
In this section, we introduce the TutorialVQA dataset and describe the data collection process .
TutorialVQA Dataset ::: Overview
Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each which are a complete unit as an answer span). Each sentence is associated with the starting and ending time-stamps, which can be used to access the relevant visual information.
The dataset contains 6,195 non-factoid QA pairs, where the answers are the segments that were manually annotated. Fig. FIGREF5 shows an example of the annotations. video_id can be used to retrieve the video information such as meta information and the transcripts. answer_start and answer_end denote the starting and ending sentence indexes of the answer span. Table. TABREF4 shows the statistics of our dataset, with each answer segment having on average about 6 sentences, showing that our answers are more verbose than those in previous factoid QA tasks.
TutorialVQA Dataset ::: Basis
We chose videos pertaining to an image editing software because of the complexity and variety of tasks involved. In these videos, a narrator is communicating an overall goal by utilizing an example. For example, in FIGREF1 the video pertains to combining multiple layers into one image. However, throughout the videos multiple subtasks are achieved, such as the opening of multiple images, the masking of images, and the placement of two images side-by-side. These subtasks involve multiple steps and are of interest to us in segmenting the videos. Each segment can be seen as a subtask within a larger video dictating an example. We thus chose these videos because of the amount of procedural information stored in each video for which the user may ask. Though there is only one domain, each video corresponds to a different overall goal.
TutorialVQA Dataset ::: Data Collection
We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos.
Our AMT experiment consisted of two parts. In the first part, we presented the workers with the video content of a segment. For each segment, we asked workers to generate questions that can be answered by the presented segment. We did not limit the number of questions a worker can input to a corresponding segment and encouraged them to input a diverse set of questions which the span can answer. Along with the questions, the workers were also required to provide a justification as to why they made their questions. We manually checked this justification to filter out the questions with poor quality by removing those questions which were unrelated to the video. One initial challenge worth mentioning is that at first some workers input questions they had about the video and not questions which the video could answer. This was solved by providing them with an unrelated example. The second part of the question collection framework consisted of a paraphrasing task. In this task we presented workers with the questions generated by the first task and asked them to write the questions differently while keeping the semantics the same. In this way, we expanded our question dataset. After filtering out the questions with low quality, we collected a total of 6,195 questions.
It is important to note the differences between our data collection process and the query generation process employed in the Search and Hyperlinking Task at MediaEval BIBREF12. In the Search and Hyperlinking Task, 30 users were tasked to first browse the collection of videos and select interesting segments with start and end times, and were then asked to conjecture questions that they would use in a search query to find the interesting video segments. This was done in order to emulate their thought-process mechanism. While the nature of their task involves queries relating to the overall videos themselves, hence coming from a video's interestingness, our task involves users already being given a video and formulating questions where the answers themselves come from within the video. By presenting the same video segment to many users, we maintain a consistent set of video segments and extend the possibility to generate a diverse set of questions for the same segment.
TutorialVQA Dataset ::: Dataset Details
Table TABREF12 presents some extracted sample questions from our dataset. The first column corresponds to an AMT-generated question, while the second column corresponds to the video ID where the segment can be found. As can be seen in the first two rows, multiple types of questions can be answered within the same video (but different segments). The last two rows display questions which belong to the same segment but correspond to different properties of the same entity, 'crop tool'. Here we observe different types of questions, such as "why", "how", "what", and "where", and can see why the answers may involve multiple steps. Some questions that the workers paraphrased were in the "yes/no" style; however, our answer segments then provide an explanation for these questions.
Each answer segment was extracted from an image editing tutorial video that involved multiple steps and procedures to produce a final image, which can partially be seen in FIGREF1. The average number of sentences per video was approximately 52, with the maximum number of sentences contained in a video being 187. The sub-tasks in the tutorial include segments (and thus answers) on editing parts of images, instructions on using certain tools, possible actions that can be performed on an image, and identifying the locations of tools and features, with the shortest and longest segment having a span of 1 and 37 sentences respectively, demonstrating the heterogeneity of the answer spans.
Baselines
Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.
Baselines ::: Baseline1: Sentence-level prediction
Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.
Model. The model takes two inputs: a transcript, $\lbrace s_1, s_2, ... s_n\rbrace $, where $s_i$ are individual sentences, and a question, $q$. The output is the span scores, $y$, i.e., the scores over all possible spans. GloVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.
where $n$ is the number of sentences. The output of Passage-level Encoding, $p$, is a sequence of vectors, $p_i$, each of which represents the latent meaning of a sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.
where [$\cdot $,$\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.
In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.
Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.
Specifically, the predicted span is counted as correct if $|pred_{start} - gt_{start}| + |pred_{end} - gt_{end}| <=$ $k$, where $pred_{start/end}$ and $gt_{start/end}$ indicate the indices of the predicted and ground-truth starting and ending sentences, respectively. We then measure the percentage of correctly predicted questions among the entire test questions.
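To make the metric concrete, a minimal sketch of the tolerance-accuracy computation is shown below; the function and variable names are our own illustration and are not part of any released evaluation code.

```python
def tolerance_accuracy(predictions, ground_truths, k=2):
    """Fraction of questions whose predicted (start, end) sentence indices lie
    within a total distance of k sentences from the gold span boundaries."""
    correct = 0
    for (pred_start, pred_end), (gt_start, gt_end) in zip(predictions, ground_truths):
        if abs(pred_start - gt_start) + abs(pred_end - gt_end) <= k:
            correct += 1
    return correct / len(ground_truths)

# One exact match and one prediction that is off by three sentences in total.
print(tolerance_accuracy([(3, 9), (10, 15)], [(3, 9), (12, 16)], k=2))  # 0.5
```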
Baselines ::: Baseline2: Segment retrieval
We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phase (see Section SECREF3). Note that each segment corresponds to a candidate answer. Then, the task is to pick the best segment for a given query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.
Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.
$h^s$ is then re-weighted using attention weights.
where $\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.
During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.
Metrics. We used accuracy and MRR (Mean Reciprocal Rank) as metrics. The accuracy is the fraction of test questions for which the top-ranked segment is the ground-truth segment.
We split the ground-truth dataset into train/dev/test sets with a 6/2/2 ratio, resulting in 3,718 (train), 1,238 (dev), and 1,239 (test) QA pairs.
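As an illustration of these two metrics, the sketch below assumes that, for each test question, the model produces a ranking of candidate segment ids and that the ground-truth segment id is known; the data layout is our assumption.

```python
def accuracy_and_mrr(ranked_candidates, gold_segments):
    """ranked_candidates[i]: segment ids for question i, sorted by descending score.
    gold_segments[i]: the ground-truth segment id for question i."""
    hits, reciprocal_ranks = 0, []
    for ranking, gold in zip(ranked_candidates, gold_segments):
        rank = ranking.index(gold) + 1        # 1-based rank of the gold segment
        hits += int(rank == 1)                # accuracy counts only rank-1 hits
        reciprocal_ranks.append(1.0 / rank)
    n = len(gold_segments)
    return hits / n, sum(reciprocal_ranks) / n

acc, mrr = accuracy_and_mrr([["s2", "s1", "s3"], ["s5", "s4"]], ["s1", "s5"])
print(acc, mrr)  # 0.5 0.75
```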
Baselines ::: Baseline3: Pipeline Segment retrieval
We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.
Model. The first two inputs are the question, q, and video transcript, v, encoded by their TF-IDF vectors BIBREF18:
We then filter the top 10 video transcripts (out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, $S_{top10_n}$, where $n$ = 10. We repeat the process for the corresponding segments:
selecting the segment with the minimal cosine distance to the query.
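A sketch of this two-stage TF-IDF retrieval using scikit-learn is given below; the function signature, the top_k parameter name, and the assumption that transcripts and segments are available as plain strings are ours.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_segment(question, video_transcripts, video_segments, top_k=10):
    """video_transcripts: one full transcript string per video.
    video_segments[v]: list of segment strings belonging to video v."""
    vectorizer = TfidfVectorizer().fit(video_transcripts + [question])
    q_vec = vectorizer.transform([question])
    video_vecs = vectorizer.transform(video_transcripts)
    # Stage 1: keep the top_k videos with the highest cosine similarity
    # (equivalently, the lowest cosine distance) to the question.
    top_videos = np.argsort(-cosine_similarity(q_vec, video_vecs)[0])[:top_k]
    # Stage 2: score only the segments of those videos and keep the best one.
    best = None
    for v in top_videos:
        sims = cosine_similarity(q_vec, vectorizer.transform(video_segments[v]))[0]
        i = int(np.argmax(sims))
        if best is None or sims[i] > best[0]:
            best = (float(sims[i]), int(v), i)
    return best  # (similarity, video index, segment index)
```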
Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:
Baselines ::: Results
Tables TABREF20, TABREF21, TABREF22 show the results. First, the tables show that the first two baselines under-perform for our task. Even with a tolerance window of 6, Baseline1 merely achieves an accuracy of .14. Baseline2, despite addressing a simpler task, has only an accuracy of .23. Second, while we originally hypothesized that the segment selection task should be easier than the sentence prediction task, Table TABREF21 shows that the task is also challenging. One possible reason is that the segments contained within the same transcript have similar contents, due to the composition of the overall task in each video, and differentiating among them may require a more sophisticated model than just using a sequence model for segment representation. Table TABREF22 shows the accuracy of Baseline3 in retrieving the correct segment, both overall and given that the selected video is within the top 10 videos. While the overall accuracy is only .16, by reducing the search space to 10 relevant videos the accuracy increases to 0.6385. In future iterations, it may then be useful to find better approaches for filtering large paragraphs of text before predicting the correct segment.
Discussion and Future Work
We performed an error analysis on Baseline1's results. We first observe that, in 92% of the errors, the predicted span and the ground-truth overlap. Furthermore, in 56% of the errors, the predicted spans are a subset or superset of the ground-truth spans. This indicates that the model finds the rough answer regions but fails to locate the precise boundaries. To address this issue, we plan on exploring the Pointer-network BIBREF19, which finds an answer span by selecting the boundary sentences. Unlike Baseline1 which avoids an explicit segmentation step, the Pointer-network can explicitly model which sentences are likely to be a boundary sentence. Moreover, the search space of the spans in the Pointer-network is $2n$ where $n$ is the number of sentences, because it selects only two boundary sentences. Note that the search space of Baseline1 is $n^2$. A much smaller search space might improve the accuracy by making the model consider fewer candidates.
In future work, we also plan to use multi-modal information. While our baselines only used the transcript, complementing the narratives with the visual information may improve the performance, similarly to the text alignment task in BIBREF11.
Conclusion
We have described the collection, analysis, and baseline results of TutorialVQA, a new type of dataset used to find answer spans in tutorial videos. Our data collection method for question-answer pairs on instructional video can be further adopted to other domains where the answers involve multiple steps and are part of an overall goal, such as cooking or educational videos. We have shown that current baseline models for finding the answer spans are not sufficient for achieving high accuracy and hope that by releasing this new dataset and task, more appropriate question answering models can be developed for question answering on instructional videos. | For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy |
7a53668cf2da4557735aec0ecf5f29868584ebcf | 7a53668cf2da4557735aec0ecf5f29868584ebcf_0 | Q: What kind of instructional videos are in the dataset?
Text: Introduction
Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.
However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask about video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by the users themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be ones asking about a how-to procedure, as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task.
Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering.
The other issue is that most videos containing a sufficient amount of information, and which are thus likely to be asked about, are much longer than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies.
In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1.
The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
Related Work
Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8, are factoid-based, meaning the answers typically involve a single entity. Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step, or even fragmented into multiple segments throughout the video.
Visual corpora in particular have proven extremely valuable to visual question-answering tasks BIBREF9, the most similar being MovieQA BIBREF1 and VideoQA BIBREF0. Similar to how our data is generated from video tutorials, the MovieQA and VideoQA corpora are generated from movie scripts and news transcripts, respectively. MovieQA's answers have a shorter span than the answers collected in our corpus, because question and answer pairs were generated after each paragraph in a movie's plot synopsis BIBREF1. The MovieQA dataset also contains specific annotated answers with incorrect examples for each question. In the VideoQA dataset, questions focus on a single entity, contrary to our instructional video dataset. Although not necessarily a visual question-answering task, the work proposed by BIBREF10 involved answering questions over transcript data. Contrary to our work, Gupta's dataset is not publicly available and their examples only showcase factoid-style questions involving single-entity answers.
BIBREF11 focus on aligning a set of instructions to a video of someone carrying out those instructions. In their task, they use the video transcript to represent the video, which they later augment with a visual cue detector on food entities. Their task focuses on procedure-based cooking videos and, contrary to our task, is primarily a text alignment task. In our task we aim to answer questions, using the transcripts, on instructional-style videos, in which the answer can involve steps not mentioned in the question.
TutorialVQA Dataset
In this section, we introduce the TutorialVQA dataset and describe the data collection process.
TutorialVQA Dataset ::: Overview
Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each which are a complete unit as an answer span). Each sentence is associated with the starting and ending time-stamps, which can be used to access the relevant visual information.
The dataset contains 6,195 non-factoid QA pairs, where the answers are the segments that were manually annotated. Fig. FIGREF5 shows an example of the annotations. video_id can be used to retrieve the video information such as meta information and the transcripts. answer_start and answer_end denote the starting and ending sentence indexes of the answer span. Table. TABREF4 shows the statistics of our dataset, with each answer segment having on average about 6 sentences, showing that our answers are more verbose than those in previous factoid QA tasks.
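For concreteness, a record following the annotation format described above might look as sketched below; the field values and the helper function are illustrative, not taken from the released files.

```python
import json

# Hypothetical record mirroring the fields described above
# (video_id, answer_start, answer_end); the values are made up.
example = {
    "video_id": "vid_0042",
    "question": "How do I combine two images into one layer?",
    "answer_start": 12,  # index of the first sentence of the answer span
    "answer_end": 18     # index of the last sentence of the answer span
}

def answer_sentences(record, transcripts):
    """Look up the answer span in a {video_id: [sentence, ...]} transcript table."""
    sentences = transcripts[record["video_id"]]
    return sentences[record["answer_start"]:record["answer_end"] + 1]

print(json.dumps(example, indent=2))
```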
TutorialVQA Dataset ::: Basis
We chose videos pertaining to an image editing software because of the complexity and variety of tasks involved. In these videos, a narrator is communicating an overall goal by utilizing an example. For example, in FIGREF1 the video pertains to combining multiple layers into one image. However, throughout the videos multiple subtasks are achieved, such as the opening of multiple images, the masking of images, and the placement of two images side-by-side. These subtasks involve multiple steps and are of interest to us in segmenting the videos. Each segment can be seen as a subtask within a larger video dictating an example. We thus chose these videos because of the amount of procedural information stored in each video for which the user may ask. Though there is only one domain, each video corresponds to a different overall goal.
TutorialVQA Dataset ::: Data Collection
We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos.
Our AMT experiment consisted of two parts. In the first part, we presented the workers with the video content of a segment. For each segment, we asked workers to generate questions that can be answered by the presented segment. We did not limit the number of questions a worker can input to a corresponding segment and encouraged them to input a diverse set of questions which the span can answer. Along with the questions, the workers were also required to provide a justification as to why they made their questions. We manually checked this justification to filter out the questions with poor quality by removing those questions which were unrelated to the video. One initial challenge worth mentioning is that at first some workers input questions they had about the video and not questions which the video could answer. This was solved by providing them with an unrelated example. The second part of the question collection framework consisted of a paraphrasing task. In this task we presented workers with the questions generated by the first task and asked them to write the questions differently while keeping the semantics the same. In this way, we expanded our question dataset. After filtering out the questions with low quality, we collected a total of 6,195 questions.
It is important to note the differences between our data collection process and the query generation process employed in the Search and Hyperlinking Task at MediaEval BIBREF12. In the Search and Hyperlinking Task, 30 users were tasked to first browse the collection of videos, select interesting segments with start and end times, and then asked to conjecture questions that they would use in a search query to find those interesting video segments. This was done in order to emulate their thought-process mechanism. While the nature of their task involves queries relating to the overall videos themselves, hence coming from a video's interestingness, our task involves users already being given a video and formulating questions whose answers come from within that video. By presenting the same video segment to many users, we maintain a consistent set of video segments and extend the possibility to generate a diverse set of questions for the same segment.
TutorialVQA Dataset ::: Dataset Details
Table TABREF12 presents some extracted sample questions from our dataset. The first column corresponds to an AMT-generated question, while the second column corresponds to the video ID where the segment can be found. As can be seen in the first two rows, multiple types of questions can be answered within the same video (but different segments). The last two rows display questions which belong to the same segment but correspond to different properties of the same entity, 'crop tool'. Here we observe different types of questions, such as "why", "how", "what", and "where", and can see why the answers may involve multiple steps. Some questions that the workers paraphrased were in the "yes/no" style; however, our answer segments provide an explanation for these questions.
Each answer segment was extracted from an image editing tutorial video that involved multiple steps and procedures to produce a final image, which can partially be seen in FIGREF1. The average number of sentences per video was approximately 52, with the maximum number of sentences contained in a video being 187. The sub-tasks in the tutorial include segments (and thus answers) on editing parts of images, instructions on using certain tools, possible actions that can be performed on an image, and identifying the locations of tools and features, with the shortest and longest segment having a span of 1 and 37 sentences respectively, demonstrating the heterogeneity of the answer spans.
Baselines
Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.
Baselines ::: Baseline1: Sentence-level prediction
Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.
Model. The model takes two inputs: a transcript, $\lbrace s_1, s_2, ... s_n\rbrace $, where $s_i$ are individual sentences, and a question, $q$. The output is the span scores, $y$, i.e., the scores over all possible spans. GloVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.
where $n$ is the number of sentences. The output of Passage-level Encoding, $p$, is a sequence of vectors, $p_i$, each of which represents the latent meaning of a sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.
where [$\cdot $,$\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.
In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.
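A condensed PyTorch sketch of this span-scoring idea is shown below: sentences are encoded with a bi-LSTM, every (start, end) pair is represented by concatenating the two sentence vectors together with a pooled question vector, and a feed-forward layer produces the score. The module names, hidden sizes, and the use of pre-pooled sentence embeddings are simplifications of ours, not the exact Baseline1 implementation.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Scores every (start, end) sentence pair of a transcript against a question."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.sent_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.question_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.scorer = nn.Sequential(nn.Linear(6 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, sent_embs, question_embs):
        # sent_embs: (n_sentences, emb_dim) pre-pooled word vectors per sentence
        # question_embs: (question_length, emb_dim) word vectors of the question
        p, _ = self.sent_enc(sent_embs.unsqueeze(0))
        p = p.squeeze(0)                                   # (n, 2*hidden)
        q, _ = self.question_enc(question_embs.unsqueeze(0))
        q = q.squeeze(0).mean(dim=0)                       # (2*hidden,) pooled question
        n = p.size(0)
        starts = p.unsqueeze(1).expand(n, n, -1)           # p_i for every span (i, j)
        ends = p.unsqueeze(0).expand(n, n, -1)             # p_j for every span (i, j)
        spans = torch.cat([starts, ends, q.expand(n, n, -1)], dim=-1)
        return self.scorer(spans).squeeze(-1)              # (n, n) span scores

scores = SpanScorer()(torch.randn(7, 300), torch.randn(5, 300))
start, end = divmod(int(torch.argmax(scores)), scores.size(1))
print(start, end)  # a real decoder would additionally enforce start <= end
```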
Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.
Specifically, the predicted span is counted as correct if $|pred_{start} - gt_{start}| + |pred_{end} - gt_{end}| <=$ $k$, where $pred_{start/end}$ and $gt_{start/end}$ indicate the indices of the predicted and ground-truth starting and ending sentences, respectively. We then measure the percentage of correctly predicted questions among the entire test questions.
Baselines ::: Baseline2: Segment retrieval
We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phase (see Section SECREF3). Note that each segment corresponds to a candidate answer. Then, the task is to pick the best segment for a given query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.
Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.
$h^s$ is then re-weighted using attention weights.
where $\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.
During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.
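The attention-based re-weighting and scoring described above can be sketched roughly as follows; hidden sizes, pooling choices, and layer names are assumptions of ours rather than the exact Baseline2 configuration.

```python
import torch
import torch.nn as nn

class AttentiveMatcher(nn.Module):
    """Question-conditioned re-weighting of segment states, followed by a scalar score."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.seg_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.q_enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(4 * hidden, 1)
        self.out = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, seg_embs, q_embs):
        h_s, _ = self.seg_enc(seg_embs.unsqueeze(0))       # (1, m, 2*hidden)
        h_q, _ = self.q_enc(q_embs.unsqueeze(0))
        q = h_q.mean(dim=1)                                # (1, 2*hidden) pooled question
        m = h_s.size(1)
        pair = torch.cat([h_s, q.unsqueeze(1).expand(-1, m, -1)], dim=-1)
        alpha = torch.softmax(self.attn(pair), dim=1)      # attention weights over positions
        weighted = (h_s * alpha).sum(dim=1)                # element-wise re-weighting, then pool
        return self.out(torch.cat([weighted, q], dim=-1)).squeeze()

score = AttentiveMatcher()(torch.randn(20, 300), torch.randn(6, 300))
print(float(score))
```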
Metrics. We used accuracy and MRR (Mean Reciprocal Rank) as metrics. The accuracy is the fraction of test questions for which the top-ranked segment is the ground-truth segment.
We split the ground-truth dataset into train/dev/test sets with a 6/2/2 ratio, resulting in 3,718 (train), 1,238 (dev), and 1,239 (test) QA pairs.
Baselines ::: Baseline3: Pipeline Segment retrieval
We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.
Model. The first two inputs are the question, q, and video transcript, v, encoded by their TF-IDF vectors BIBREF18:
We then filter the top 10 video transcripts (out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, $S_{top10_n}$, where $n$ = 10. We repeat the process for the corresponding segments:
selecting the segment with the minimal cosine distance to the query.
Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:
Baselines ::: Results
Tables TABREF20, TABREF21, TABREF22 show the results. First, the tables show that the first two baselines under-perform for our task. Even with a tolerance window of 6, Baseline1 merely achieves an accuracy of .14. Baseline2, despite addressing a simpler task, has only an accuracy of .23. Second, while we originally hypothesized that the segment selection task should be easier than the sentence prediction task, Table TABREF21 shows that the task is also challenging. One possible reason is that the segments contained within the same transcript have similar contents, due to the composition of the overall task in each video, and differentiating among them may require a more sophisticated model than just using a sequence model for segment representation. Table TABREF22 shows the accuracy of Baseline3 in retrieving the correct segment, both overall and given that the selected video is within the top 10 videos. While the overall accuracy is only .16, by reducing the search space to 10 relevant videos the accuracy increases to 0.6385. In future iterations, it may then be useful to find better approaches for filtering large paragraphs of text before predicting the correct segment.
Discussion and Future Work
We performed an error analysis on Baseline1's results. We first observe that, in 92% of the errors, the predicted span and the ground-truth overlap. Furthermore, in 56% of the errors, the predicted spans are a subset or superset of the ground-truth spans. This indicates that the model finds the rough answer regions but fails to locate the precise boundaries. To address this issue, we plan on exploring the Pointer-network BIBREF19, which finds an answer span by selecting the boundary sentences. Unlike Baseline1 which avoids an explicit segmentation step, the Pointer-network can explicitly model which sentences are likely to be a boundary sentence. Moreover, the search space of the spans in the Pointer-network is $2n$ where $n$ is the number of sentences, because it selects only two boundary sentences. Note that the search space of Baseline1 is $n^2$. A much smaller search space might improve the accuracy by making the model consider fewer candidates.
In future work, we also plan to use multi-modal information. While our baselines only used the transcript, complementing the narratives with the visual information may improve the performance, similarly to the text alignment task in BIBREF11.
Conclusion
We have described the collection, analysis, and baseline results of TutorialVQA, a new type of dataset used to find answer spans in tutorial videos. Our data collection method for question-answer pairs on instructional video can be further adopted to other domains where the answers involve multiple steps and are part of an overall goal, such as cooking or educational videos. We have shown that current baseline models for finding the answer spans are not sufficient for achieving high accuracy and hope that by releasing this new dataset and task, more appropriate question answering models can be developed for question answering on instructional videos. | tutorial videos for a photo-editing software |
8051927f914d730dfc61b2dc7a8580707b462e56 | 8051927f914d730dfc61b2dc7a8580707b462e56_0 | Q: What baseline algorithms were presented?
Text: Introduction
Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.
However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask about video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by the users themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be ones asking about a how-to procedure, as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task.
Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering.
The other issue is that most videos containing a sufficient amount of information, and which are thus likely to be asked about, are much longer than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies.
In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1.
The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
Related Work
Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8, are factoid-based, meaning the answers typically involve a single entity. Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step, or even fragmented into multiple segments throughout the video.
Visual corpora in particular have proven extremely valuable to visual question-answering tasks BIBREF9, the most similar being MovieQA BIBREF1 and VideoQA BIBREF0. Similar to how our data is generated from video tutorials, the MovieQA and VideoQA corpora are generated from movie scripts and news transcripts, respectively. MovieQA's answers have a shorter span than the answers collected in our corpus, because question and answer pairs were generated after each paragraph in a movie's plot synopsis BIBREF1. The MovieQA dataset also contains specific annotated answers with incorrect examples for each question. In the VideoQA dataset, questions focus on a single entity, contrary to our instructional video dataset. Although not necessarily a visual question-answering task, the work proposed by BIBREF10 involved answering questions over transcript data. Contrary to our work, Gupta's dataset is not publicly available and their examples only showcase factoid-style questions involving single-entity answers.
BIBREF11 focus on aligning a set of instructions to a video of someone carrying out those instructions. In their task, they use the video transcript to represent the video, which they later augment with a visual cue detector on food entities. Their task focuses on procedure-based cooking videos and, contrary to our task, is primarily a text alignment task. In our task we aim to answer questions, using the transcripts, on instructional-style videos, in which the answer can involve steps not mentioned in the question.
TutorialVQA Dataset
In this section, we introduce the TutorialVQA dataset and describe the data collection process.
TutorialVQA Dataset ::: Overview
Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each which are a complete unit as an answer span). Each sentence is associated with the starting and ending time-stamps, which can be used to access the relevant visual information.
The dataset contains 6,195 non-factoid QA pairs, where the answers are the segments that were manually annotated. Fig. FIGREF5 shows an example of the annotations. video_id can be used to retrieve the video information such as meta information and the transcripts. answer_start and answer_end denote the starting and ending sentence indexes of the answer span. Table. TABREF4 shows the statistics of our dataset, with each answer segment having on average about 6 sentences, showing that our answers are more verbose than those in previous factoid QA tasks.
TutorialVQA Dataset ::: Basis
We chose videos pertaining to an image editing software because of the complexity and variety of tasks involved. In these videos, a narrator is communicating an overall goal by utilizing an example. For example, in FIGREF1 the video pertains to combining multiple layers into one image. However, throughout the videos multiple subtasks are achieved, such as the opening of multiple images, the masking of images, and the placement of two images side-by-side. These subtasks involve multiple steps and are of interest to us in segmenting the videos. Each segment can be seen as a subtask within a larger video dictating an example. We thus chose these videos because of the amount of procedural information stored in each video for which the user may ask. Though there is only one domain, each video corresponds to a different overall goal.
TutorialVQA Dataset ::: Data Collection
We downloaded 76 videos from a tutorial website about an image editing program . Each video is pre-processed to provide the transcripts and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs . One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and error-prone because the videos are typically long and finding a relevant part from a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second we asked AMT workers to provide question annotations for the videos.
Our AMT experiment consisted of two parts. In the first part, we presented the workers with the video content of a segment. For each segment, we asked workers to generate questions that can be answered by the presented segment. We did not limit the number of questions a worker can input to a corresponding segment and encouraged them to input a diverse set of questions which the span can answer. Along with the questions, the workers were also required to provide a justification as to why they made their questions. We manually checked this justification to filter out the questions with poor quality by removing those questions which were unrelated to the video. One initial challenge worth mentioning is that at first some workers input questions they had about the video and not questions which the video could answer. This was solved by providing them with an unrelated example. The second part of the question collection framework consisted of a paraphrasing task. In this task we presented workers with the questions generated by the first task and asked them to write the questions differently while keeping the semantics the same. In this way, we expanded our question dataset. After filtering out the questions with low quality, we collected a total of 6,195 questions.
It is important to note the differences between our data collection process and the query generation process employed in the Search and Hyperlinking Task at MediaEval BIBREF12. In the Search and Hyperlinking Task, 30 users were tasked to first browse the collection of videos, select interesting segments with start and end times, and then asked to conjecture questions that they would use in a search query to find those interesting video segments. This was done in order to emulate their thought-process mechanism. While the nature of their task involves queries relating to the overall videos themselves, hence coming from a video's interestingness, our task involves users already being given a video and formulating questions whose answers come from within that video. By presenting the same video segment to many users, we maintain a consistent set of video segments and extend the possibility to generate a diverse set of questions for the same segment.
TutorialVQA Dataset ::: Dataset Details
Table TABREF12 presents some extracted sample questions from our dataset. The first column corresponds to an AMT-generated question, while the second column corresponds to the video ID where the segment can be found. As can be seen in the first two rows, multiple types of questions can be answered within the same video (but different segments). The last two rows display questions which belong to the same segment but correspond to different properties of the same entity, 'crop tool'. Here we observe different types of questions, such as "why", "how", "what", and "where", and can see why the answers may involve multiple steps. Some questions that the workers paraphrased were in the "yes/no" style; however, our answer segments provide an explanation for these questions.
Each answer segment was extracted from an image editing tutorial video that involved multiple steps and procedures to produce a final image, which can partially be seen in FIGREF1. The average number of sentences per video was approximately 52, with the maximum number of sentences contained in a video being 187. The sub-tasks in the tutorial include segments (and thus answers) on editing parts of images, instructions on using certain tools, possible actions that can be performed on an image, and identifying the locations of tools and features, with the shortest and longest segment having a span of 1 and 37 sentences respectively, demonstrating the heterogeneity of the answer spans.
Baselines
Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.
Baselines ::: Baseline1: Sentence-level prediction
Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.
Model. The model takes two inputs: a transcript, $\lbrace s_1, s_2, ... s_n\rbrace $, where $s_i$ are individual sentences, and a question, $q$. The output is the span scores, $y$, i.e., the scores over all possible spans. GloVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.
where $n$ is the number of sentences. The output of Passage-level Encoding, $p$, is a sequence of vectors, $p_i$, each of which represents the latent meaning of a sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.
where [$\cdot $,$\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.
In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.
Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.
Specifically, the predicted span is counted as correct if $|pred_{start} - gt_{start}| + |pred_{end} - gt_{end}| <=$ $k$, where $pred_{start/end}$ and $gt_{start/end}$ indicate the indices of the predicted and ground-truth starting and ending sentences, respectively. We then measure the percentage of correctly predicted questions among the entire test questions.
Baselines ::: Baseline2: Segment retrieval
We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phase (see Section SECREF3). Note that each segment corresponds to a candidate answer. Then, the task is to pick the best segment for a given query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.
Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.
$h^s$ is then re-weighted using attention weights.
where $\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.
During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.
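A small sketch of this negative-sampling scheme is given below; the data structures are hypothetical, but the logic mirrors the description above: every other segment from the same transcript becomes a negative example.

```python
def build_training_examples(qa_pairs, segments_by_video):
    """qa_pairs: list of (question, video_id, gold_segment_id).
    segments_by_video: {video_id: {segment_id: segment_text}}."""
    examples = []
    for question, video_id, gold_id in qa_pairs:
        for seg_id, seg_text in segments_by_video[video_id].items():
            label = 1 if seg_id == gold_id else 0   # in-transcript negatives get label 0
            examples.append((question, seg_text, label))
    return examples

pairs = [("How do I crop an image?", "v1", "s2")]
segs = {"v1": {"s1": "Open the file ...", "s2": "Select the crop tool ...", "s3": "Export ..."}}
for example in build_training_examples(pairs, segs):
    print(example)
```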
Metrics. We used accuracy and MRR (Mean Reciprocal Rank) as metrics. The accuracy is the fraction of test questions for which the top-ranked segment is the ground-truth segment.
We split the ground-truth dataset into train/dev/test sets with a 6/2/2 ratio, resulting in 3,718 (train), 1,238 (dev), and 1,239 (test) QA pairs.
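One way such a split could be produced is sketched below; the random seed is arbitrary, and depending on how the boundaries are rounded the exact counts may differ by one from the figures reported above.

```python
import random

def split_622(qa_pairs, seed=0):
    """Shuffle and split a list of QA pairs into train/dev/test with a 6/2/2 ratio."""
    items = list(qa_pairs)
    random.Random(seed).shuffle(items)
    n_train, n_dev = int(0.6 * len(items)), int(0.2 * len(items))
    return items[:n_train], items[n_train:n_train + n_dev], items[n_train + n_dev:]

train, dev, test = split_622(range(6195))
print(len(train), len(dev), len(test))  # 3717 1239 1239
```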
Baselines ::: Baseline3: Pipeline Segment retrieval
We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if found in these videos. While the task is less computationally expensive than the previous baseline, we do not learn the segment representations, as this task is a simple retrieval task based on TF-IDF embeddings.
Model. The first two inputs are the question, q, and video transcript, v, encoded by their TF-IDF vectors BIBREF18:
We then filter the top 10 video transcripts (out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, $S_{top10_n}$, where $n$ = 10. We repeat the process for the corresponding segments:
selecting the segment with the minimal cosine distance to the query.
Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:
Baselines ::: Results
Tables TABREF20, TABREF21, TABREF22 show the results. First, the tables show that the first two baselines under-perform for our task. Even with a tolerance window of 6, Baseline1 merely achieves an accuracy of .14. Baseline2, despite addressing a simpler task, has only an accuracy of .23. Second, while we originally hypothesized that the segment selection task should be easier than the sentence prediction task, Table TABREF21 shows that the task is also challenging. One possible reason is that the segments contained within the same transcript have similar contents, due to the composition of the overall task in each video, and differentiating among them may require a more sophisticated model than just using a sequence model for segment representation. Table TABREF22 shows the accuracy of Baseline3 in retrieving the correct segment, both overall and given that the selected video is within the top 10 videos. While the overall accuracy is only .16, by reducing the search space to 10 relevant videos the accuracy increases to 0.6385. In future iterations, it may then be useful to find better approaches for filtering large paragraphs of text before predicting the correct segment.
Discussion and Future Work
We performed an error analysis on Baseline1's results. We first observe that, in 92% of the errors, the predicted span and the ground-truth overlap. Furthermore, in 56% of the errors, the predicted spans are a subset or superset of the ground-truth spans. This indicates that the model finds the rough answer regions but fails to locate the precise boundaries. To address this issue, we plan on exploring the Pointer-network BIBREF19, which finds an answer span by selecting the boundary sentences. Unlike Baseline1 which avoids an explicit segmentation step, the Pointer-network can explicitly model which sentences are likely to be a boundary sentence. Moreover, the search space of the spans in the Pointer-network is $2n$ where $n$ is the number of sentences, because it selects only two boundary sentences. Note that the search space of Baseline1 is $n^2$. A much smaller search space might improve the accuracy by making the model consider fewer candidates.
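To illustrate why the boundary-based formulation shrinks the search space, the rough sketch below scores each sentence separately as a start and as an end (giving $2n$ outputs instead of $n^2$ span scores); this is our simplification for illustration, with question conditioning omitted, and is not the planned Pointer-network implementation.

```python
import torch
import torch.nn as nn

class BoundaryScorer(nn.Module):
    """Assigns every sentence a start score and an end score (2n outputs in total)."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.start_head = nn.Linear(2 * hidden, 1)
        self.end_head = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):
        p, _ = self.enc(sent_embs.unsqueeze(0))        # (1, n, 2*hidden)
        start_scores = self.start_head(p).squeeze(-1)  # (1, n)
        end_scores = self.end_head(p).squeeze(-1)      # (1, n)
        return start_scores, end_scores

starts, ends = BoundaryScorer()(torch.randn(12, 300))
start = int(torch.argmax(starts))
end = start + int(torch.argmax(ends[:, start:]))       # constrain the end to follow the start
print(start, end)
```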
In future work, we also plan to use multi-modal information. While our baselines only used the transcript, complementing the narratives with the visual information may improve the performance, similarly to the text alignment task in BIBREF11.
Conclusion
We have described the collection, analysis, and baseline results of TutorialVQA, a new type of dataset used to find answer spans in tutorial videos. Our data collection method for question-answer pairs on instructional video can be further adopted to other domains where the answers involve multiple steps and are part of an overall goal, such as cooking or educational videos. We have shown that current baseline models for finding the answer spans are not sufficient for achieving high accuracy and hope that by releasing this new dataset and task, more appropriate question answering models can be developed for question answering on instructional videos. | a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm |
09621c9cd762e1409f22d501513858d67dcd3c7c | 09621c9cd762e1409f22d501513858d67dcd3c7c_0 | Q: What is the source of the triples?
Text: Introduction
Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.
However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask about video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by the users themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be ones asking about a how-to procedure, as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task.
Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering.
The other issue is that most videos containing a sufficient amount of information, and which are thus likely to be asked about, are much longer than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies.
In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1.
The remainder of this paper is structured as follows. Section SECREF3 introduces TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
Related Work
Most relevant to our proposed work is the reading comprehension task, which is a question answering task involving a piece of text such as a paragraph or article. Datasets for the reading comprehension task, such as SQuAD BIBREF6 based on Wikipedia, TriviaQA BIBREF7 constructed from trivia questions with answer evidence from Wikipedia, or those from Hermann et al. based on CNN and Daily Mail articles BIBREF8, are factoid-based, meaning the answers typically involve a single entity. Differing from video transcripts, the structures of these data sources, namely paragraphs from Wikipedia and news sources, are typically straightforward since they are meant to be read. In contrast, video transcripts originate from spoken dialogue, which can be verbose, unstructured, and disconnected. Furthermore, the answers in instructional video transcripts can be longer, spanning multiple sentences if the process is multi-step, or even fragmented into multiple segments throughout the video.
Visual corpora in particular have proven extremely valuable to visual question-answering tasks BIBREF9, the most similar being MovieQA BIBREF1 and VideoQA BIBREF0. Similar to how our data is generated from video tutorials, the MovieQA and VideoQA corpora are generated from movie scripts and news transcripts, respectively. MovieQA's answers have a shorter span than the answers collected in our corpus, because question and answer pairs were generated after each paragraph in a movie's plot synopsis BIBREF1. The MovieQA dataset also contains specific annotated answers with incorrect examples for each question. In the VideoQA dataset, questions focus on a single entity, contrary to our instructional video dataset. Although not necessarily a visual question-answering task, the work proposed by BIBREF10 involved answering questions over transcript data. Contrary to our work, Gupta's dataset is not publicly available and their examples only showcase factoid-style questions involving single-entity answers.
BIBREF11 focus on aligning a set of instructions to a video of someone carrying out those instructions. In their task, they use the video transcript to represent the video, which they later augment with a visual cue detector on food entities. Their task focuses on procedure-based cooking videos and, contrary to ours, is primarily a text alignment task. In our task we aim to answer questions, using the transcripts, on instructional-style videos, in which the answer can involve steps not mentioned in the question.
TutorialVQA Dataset
In this section, we introduce the TutorialVQA dataset and describe the data collection process.
TutorialVQA Dataset ::: Overview
Our dataset consists of 76 tutorial videos pertaining to an image editing software. All of the videos include spoken instructions which are transcribed and manually segmented into multiple segments. Specifically, we asked the annotators to manually divide each video into multiple segments such that each of the segments can serve as an answer to any question. For example, Fig. FIGREF1 shows example segments marked in red (each of which is a complete unit serving as an answer span). Each sentence is associated with starting and ending time-stamps, which can be used to access the relevant visual information.
The dataset contains 6,195 non-factoid QA pairs, where the answers are the segments that were manually annotated. Fig. FIGREF5 shows an example of the annotations. video_id can be used to retrieve the video information such as meta information and the transcripts. answer_start and answer_end denote the starting and ending sentence indexes of the answer span. Table TABREF4 shows the statistics of our dataset, with each answer segment having on average about 6 sentences, showing that our answers are more verbose than those in previous factoid QA tasks.
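To make the annotation format concrete, a hypothetical record using the field names described above (video_id, answer_start, answer_end) might look like the sketch below; the identifier, question, and index values are illustrative and not taken from the released data.

```python
# Illustrative annotation record for one question-answer pair.
# Field names follow the description above; the values are made up.
example_annotation = {
    "video_id": "tutorial_video_042",   # used to retrieve meta information and the transcript
    "question": "How do I combine two images into one layer?",
    "answer_start": 12,                  # index of the first sentence of the answer span
    "answer_end": 18,                    # index of the last sentence of the answer span
}

def answer_sentences(transcript_sentences, record):
    # Recover the answer span by slicing the sentence-level transcript.
    return transcript_sentences[record["answer_start"]:record["answer_end"] + 1]
```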
TutorialVQA Dataset ::: Basis
We chose videos pertaining to an image editing software because of the complexity and variety of tasks involved. In these videos, a narrator is communicating an overall goal by utilizing an example. For example, in FIGREF1 the video pertains to combining multiple layers into one image. However, throughout the videos multiple subtasks are achieved, such as the opening of multiple images, the masking of images, and the placement of two images side-by-side. These subtasks involve multiple steps and are of interest to us in segmenting the videos. Each segment can be seen as a subtask within a larger video illustrating an example. We thus chose these videos because of the amount of procedural information stored in each video about which a user may ask. Though there is only one domain, each video corresponds to a different overall goal.
TutorialVQA Dataset ::: Data Collection
We downloaded 76 videos from a tutorial website about an image editing program. Each video is pre-processed to provide the transcript and the time-stamp information for each sentence in the transcript. We then used Amazon Mechanical Turk to collect the question-answer pairs. One naive way of collecting the data is to prepare a question list and then, for each question, ask the workers to find the relevant parts in the video. However, this approach is not feasible and is error-prone because the videos are typically long and finding a relevant part in a long video is difficult. Doing so might also cause us to miss questions which were relevant to the video segment. Instead, we took a reversed approach. First, for each video, we manually identified the sentence spans that can serve as answers. These candidates are of various granularity and may overlap. The segments are also complete in that they encompass the beginning and end of a task. In total, we identified 408 segments from the 76 videos. Second, we asked AMT workers to provide question annotations for the videos.
Our AMT experiment consisted of two parts. In the first part, we presented the workers with the video content of a segment. For each segment, we asked workers to generate questions that can be answered by the presented segment. We did not limit the number of questions a worker can input to a corresponding segment and encouraged them to input a diverse set of questions which the span can answer. Along with the questions, the workers were also required to provide a justification as to why they made their questions. We manually checked this justification to filter out the questions with poor quality by removing those questions which were unrelated to the video. One initial challenge worth mentioning is that at first some workers input questions they had about the video and not questions which the video could answer. This was solved by providing them with an unrelated example. The second part of the question collection framework consisted of a paraphrasing task. In this task we presented workers with the questions generated by the first task and asked them to write the questions differently while keeping the semantics the same. In this way, we expanded our question dataset. After filtering out the questions with low quality, we collected a total of 6,195 questions.
It is important to note the differences between our data collection process and the query generation process employed in the Search and Hyperlinking Task at MediaEval BIBREF12. In the Search and Hyperlinking Task, 30 users were tasked to first browse the collection of videos, select interesting segments with start and end times, and then conjecture questions that they would use as a search query to find the interesting video segments. This was done in order to emulate their thought process. While the nature of their task involves queries relating to the overall videos themselves, hence coming from a video's interestingness, our task involves users already being given a video and formulating questions whose answers come from within that video. By presenting the same video segment to many users, we maintain a consistent set of video segments and extend the possibility of generating a diverse set of questions for the same segment.
TutorialVQA Dataset ::: Dataset Details
Table TABREF12 presents some extracted sample questions from our dataset. The first column corresponds to an AMT-generated question, while the second column corresponds to the video ID where the segment can be found. As can be seen in the first two rows, multiple types of questions can be answered within the same video (but different segments). The last two rows display questions which belong to the same segment but correspond to different properties of the same entity, 'crop tool'. Here we observe different types of questions, such as "why", "how", "what", and "where", and can see why the answers may involve multiple steps. Some questions that the workers paraphrased were in the "yes/no" style; however, our answer segments then provide an explanation to these questions.
Each answer segment was extracted from an image editing tutorial video that involved multiple steps and procedures to produce a final image, which can partially be seen in FIGREF1. The average number of sentences per video was approximately 52, with the maximum number of sentences contained in a video being 187. The sub-tasks in the tutorial include segments (and thus answers) on editing parts of images, instructions on using certain tools, possible actions that can be performed on an image, and identifying the locations of tools and features, with the shortest and longest segment having a span of 1 and 37 sentences respectively, demonstrating the heterogeneity of the answer spans.
Baselines
Our video question answering task is novel and to our knowledge, no model has been designed specifically for this task. As a first step towards solving this problem, we evaluated the performance of state-of-the-art models developed for other QA tasks, including a sentence-level prediction task and two segment retrieval tasks. In this section, we report their results on the TutorialVQA dataset.
Baselines ::: Baseline1: Sentence-level prediction
Given a transcript (a sequence of sentences) and a question, Baseline1 predicts (starting sentence index, ending sentence index). The model is based on RaSor BIBREF13, which has been developed for the SQuAD QA task BIBREF6. RaSor concatenates the embedding vectors of the starting and the ending words to represent a span. Following this idea, Baseline1 represents a span of sentences by concatenating the vectors of the starting and ending sentences. The left diagram in Fig. FIGREF15 illustrates the Baseline1 model.
Model. The model takes two inputs, a transcript, $\lbrace s_1, s_2, ... s_n\rbrace $ where $s_i$ are individual sentences and a question, $q$. The output is the span scores, $y$, the scores over all possible spans. GLoVe BIBREF14 is used for the word representations in the transcript and the questions. We use two bi-LSTMs BIBREF15 to encode the transcript.
where $n$ is the number of sentences. The output of Passage-level Encoding, $p$, is a sequence of vectors, $p_i$, each of which represents the latent meaning of a sentence. Then, the model combines each pair of sentence embeddings ($p_i$, $p_j$) to generate a span embedding.
where [$\cdot $,$\cdot $] indicates the concatenation. Finally, we use a one-layer feed forward network to compute a score between each span and a question.
In training, we use cross-entropy as an objective function. In testing, the span with the highest score is picked as an answer.
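The span-scoring step can be sketched as follows, assuming sentence and question vectors have already been produced by the bi-LSTM encoders; the dimensions and the single-layer scorer parameters (w, b) are placeholders rather than the exact configuration used in the experiments.

```python
import numpy as np

def score_spans(sentence_vecs, question_vec, w, b):
    # Score every candidate span (i, j) by concatenating the start- and end-sentence
    # vectors with the question vector and applying a one-layer feed-forward scorer.
    n = len(sentence_vecs)
    scores = {}
    for i in range(n):
        for j in range(i, n):
            span_vec = np.concatenate([sentence_vecs[i], sentence_vecs[j], question_vec])
            scores[(i, j)] = float(span_vec @ w + b)
    return scores

def predict_span(scores):
    # At test time, the span with the highest score is returned as the answer.
    return max(scores, key=scores.get)
```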
Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The rationale behind the metric is that, in practice, it suffices to recommend a rough span which contains the answer – a difference of a few seconds would not matter much to the user.
Specifically, the predicted span is counted as correct if $|pred_{start} - gt_{start}| + |pred_{end} - gt_{end}| <=$ $k$, where $pred_{start/end}$ and $gt_{start/end}$ indicate the indices of the predicted and ground-truth starting and ending sentences, respectively. We then measure the percentage of correctly predicted questions among the entire test questions.
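The tolerance accuracy defined above can be computed directly from the predicted and ground-truth sentence indices; the following sketch follows the formula in the text.

```python
def tolerance_accuracy(predictions, ground_truths, k):
    # predictions and ground_truths are lists of (start_index, end_index) pairs.
    # A prediction counts as correct if the summed start/end offset is at most k.
    correct = 0
    for (p_start, p_end), (g_start, g_end) in zip(predictions, ground_truths):
        if abs(p_start - g_start) + abs(p_end - g_end) <= k:
            correct += 1
    return correct / len(predictions)

# Example: with k = 6, a prediction of (10, 15) for a gold span of (12, 16) is correct.
print(tolerance_accuracy([(10, 15)], [(12, 16)], k=6))  # 1.0
```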
Baselines ::: Baseline2: Segment retrieval
We also considered a simpler task by casting our problem as a retrieval task. Specifically, in addition to a plain transcript, we also provided the model with the segmentation information which was created during the data collection phase (see Section SECREF3). Note that each segment corresponds to a candidate answer. Then, the task is to pick the best segment for a given query. This task is easier than Baseline1's task in that the segmentation information is provided to the model. Unlike Baseline1, however, it is unable to return an answer span at various granularities. Baseline2 is based on the attentive LSTM BIBREF17, which has been developed for the InsuranceQA task. The right diagram in Fig. FIGREF15 illustrates the Baseline2 model.
Model. The two inputs, $s$ and $q$ represent the segment text and a question. The model first encodes the two inputs.
$h^s$ is then re-weighted using attention weights.
where $\odot $ denotes the element-wise multiplication operation. The final score is computed using a one-layer feed-forward network.
During training, the model requires negative samples. For each positive example, (question, ground-truth segment), all the other segments in the same transcript are used as negative samples. Cross entropy is used as an objective function.
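A simplified sketch of the attentive-LSTM scoring step is given below, assuming pre-embedded token sequences. The layer sizes, the mean-pooling of the question, and the attention parameterization are simplifications of ours, not the published configuration.

```python
import torch
import torch.nn as nn

class AttentiveSegmentScorer(nn.Module):
    # Encodes a question and a candidate segment with bi-LSTMs, re-weights the segment
    # states with question-conditioned attention, and outputs one relevance score.
    def __init__(self, emb_dim=100, hidden=64):
        super().__init__()
        self.q_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.s_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(4 * hidden, 1)
        self.scorer = nn.Linear(2 * hidden, 1)

    def forward(self, question_emb, segment_emb):
        # question_emb: (B, Lq, emb_dim); segment_emb: (B, Ls, emb_dim)
        q_states, _ = self.q_lstm(question_emb)             # (B, Lq, 2h)
        q_vec = q_states.mean(dim=1)                        # simple pooling of the question
        s_states, _ = self.s_lstm(segment_emb)              # (B, Ls, 2h)
        q_expanded = q_vec.unsqueeze(1).expand(-1, s_states.size(1), -1)
        weights = torch.softmax(
            self.attn(torch.cat([s_states, q_expanded], dim=-1)).squeeze(-1), dim=-1)
        s_vec = (s_states * weights.unsqueeze(-1)).sum(dim=1)  # attention-weighted segment
        return self.scorer(s_vec).squeeze(-1)               # one score per (question, segment)
```

During training, every other segment in the same transcript would serve as a negative sample for the cross-entropy objective over segment scores, as described above.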
Metrics. We used accuracy and MRR (Mean Reciprocal Rank) as metrics. The accuracy is the fraction of questions for which the ground-truth segment receives the highest score.
We split the ground-truth dataset into train/dev/test sets with a ratio of 6/2/2. The resulting sizes are 3,718 (train), 1,238 (dev), and 1,239 (test) QA pairs.
Baselines ::: Baseline3: Pipeline Segment retrieval
We construct a pipelined approach through another segment retrieval task, calculating the cosine similarities between the segment and question embeddings. In this task, however, we want to test the accuracy of retrieving the segments given that we first retrieve the correct video from our 76 videos. First, we generate the TF-IDF embeddings for the whole video transcripts and the questions. The next step involves retrieving the videos which have the lowest cosine distance between the video transcripts and the question. We then filter and store the top ten videos, reducing the number of computations required in the next step. Finally, we calculate the cosine distances between the question and the segments which belong to the filtered top 10 videos, marking it as correct if it is found in these videos. While this task is less computationally expensive than the previous baseline, we do not learn the segment representations, as it is a simple retrieval task based on TF-IDF embeddings.
Model. The two inputs are the question, $q$, and the video transcript, $v$, encoded by their TF-IDF vectors BIBREF18:
We then filter the top 10 video transcripts (out of 76) with the minimum cosine distance, and further compute the TF-IDF vectors for their segments, $S_{top10_n}$, where $n = 10$. We repeat the process for the corresponding segments:
selecting the segment with the minimal cosine distance to the query.
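The two-stage pipeline can be approximated with standard TF-IDF utilities; the snippet below is a sketch of the filtering-then-selection procedure described here, with placeholder inputs in place of the actual transcripts and segments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_segment(question, video_transcripts, video_segments, top_k=10):
    # video_transcripts: one full transcript string per video.
    # video_segments: list of lists; the segment texts belonging to each video.
    vec = TfidfVectorizer()
    video_matrix = vec.fit_transform(video_transcripts)
    q_vec = vec.transform([question])
    sims = cosine_similarity(q_vec, video_matrix).ravel()
    top_videos = sims.argsort()[::-1][:top_k]            # stage 1: keep the closest videos

    candidates = [(v, seg) for v in top_videos for seg in video_segments[v]]
    seg_vec = TfidfVectorizer()
    seg_matrix = seg_vec.fit_transform([seg for _, seg in candidates])
    q_seg = seg_vec.transform([question])
    seg_sims = cosine_similarity(q_seg, seg_matrix).ravel()
    return candidates[seg_sims.argmax()]                 # stage 2: closest segment among them
```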
Metrics. To evaluate our pipeline approach we use overall accuracy after filtering and accuracy given that the segment is in the top 10 videos. While the first metric is similar to SECREF17, the second can indicate if initially searching on the video space can be used to improve our selection:
Baselines ::: Results
Tables TABREF20, TABREF21, and TABREF22 show the results. First, the tables show that the first two baselines under-perform on our task. Even with a tolerance window of 6, Baseline1 merely achieves an accuracy of .14. Baseline2, despite addressing a simpler task, has an accuracy of only .23. Second, while we originally hypothesized that the segment selection task should be easier than the sentence prediction task, Table TABREF21 shows that the task is also challenging. One possible reason is that the segments contained within the same transcript have similar contents, due to the composition of the overall task in each video, and differentiating among them may require a more sophisticated model than just using a sequence model for segment representation. Table TABREF22 shows the accuracy of Baseline3 in retrieving the correct segment, both overall and given that the selected video is within the top 10 videos. While the overall accuracy is only .16, by reducing the search space to 10 relevant videos our accuracy increases to 0.6385. In future iterations, it may then be useful to find better approaches for filtering large paragraphs of text before predicting the correct segment.
Discussion and Future Work
We performed an error analysis on Baseline1's results. We first observe that, in 92% of the errors, the predicted span and the ground-truth overlap. Furthermore, in 56% of the errors, the predicted spans are a subset or superset of the ground-truth spans. This indicates that the model finds the rough answer regions but fails to locate the precise boundaries. To address this issue, we plan on exploring the Pointer-network BIBREF19, which finds an answer span by selecting the boundary sentences. Unlike Baseline1 which avoids an explicit segmentation step, the Pointer-network can explicitly model which sentences are likely to be a boundary sentence. Moreover, the search space of the spans in the Pointer-network is $2n$ where $n$ is the number of sentences, because it selects only two boundary sentences. Note that the search space of Baseline1 is $n^2$. A much smaller search space might improve the accuracy by making the model consider fewer candidates.
In future work, we also plan to use multi-modal information. While our baselines only used the transcript, complementing the narratives with the visual information may improve the performance, similarly to the text alignment task in BIBREF11.
Conclusion
We have described the collection, analysis, and baseline results of TutorialVQA, a new type of dataset used to find answer spans in tutorial videos. Our data collection method for question-answer pairs on instructional video can be further adopted to other domains where the answers involve multiple steps and are part of an overall goal, such as cooking or educational videos. We have shown that current baseline models for finding the answer spans are not sufficient for achieving high accuracy and hope that by releasing this new dataset and task, more appropriate question answering models can be developed for question answering on instructional videos. | a tutorial website about an image editing program |
10ddac87daf153cf674589cc1c64a795907d5d9a | 10ddac87daf153cf674589cc1c64a795907d5d9a_0 | Q: How much better is performance of the proposed model compared to the state of the art in these various experiments?
Text: Introduction
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore research on aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking an aspect extraction method for the target task because there is not enough research support.
The APC task is a kind of classification problem. Research concerning the APC task is more abundant than for the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity on the sentence level or document level. In the APC task, the polarities are most usually classified into three categories: positive, negative, and neutral. It is obvious that the sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from reviews or tweets. In most research BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB scheme to annotate tokens.
Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance on the commonly used SemEval-2014 task4 datasets, the experimental results on four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual tasks. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspects and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. This is of great significance for field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction.
This paper is the first to apply self-attention and local context focus techniques to the aspect term extraction task, and it fully explores their potential in this task.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves the performance of both the ATE task and the APC subtask, and achieves new state-of-the-art performance, especially on the F1 score of the ATE task. Besides, we adopted the domain-adapted BERT model trained on a domain-related corpus for the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of the APC task on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence, namely the aspect term label and the sentiment polarity label, applicable to the SemEval-2014 and Chinese review datasets for the ABSA joint task. The dual labels improve the learning efficiency of the proposed model.
Related Works
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and focus on one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.
Related Works ::: Aspect Term Extraction
The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.
Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improve the coherence of all aspects. BIBREF22 proposed a deep neural network-based model, namely coupled multi-level attention, which does not require any parser or other linguistic resources for pre-processing and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms to be learned interactively and dually propagated during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 was proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction.
Up to now, MHSA and pre-trained models have not been applied to the ATE task. This paper explores the potential of these new deep learning techniques and network architectures for the ATE task.
Related Works ::: Aspect Polarity Classification
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has comprehensively turned to deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed methods for the Chinese-language APC task, which conduct the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed at the aspect sentiment classification subtask and the aspect-opinion pair identification subtask is proposed by BIBREF30, in which external knowledge is considered and put into the network to alleviate the problem of insufficient training data. The gated alternate neural network (GANN) BIBREF31 proposed for the APC task aims to solve the shortcomings of traditional RNNs and CNNs. The GANN applies the gate truncation RNN (GTR) to learn aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review dataset show that the proposed model works well while conducting the ATE and APC subtasks simultaneously.
BERT-SPC is the BERT text pair classification model; it is a variation of BERT adapted to solve the ABSA task in BIBREF9 and achieves high performance. LCF-Bert BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model is based on a large general corpus and is easy to apply to most tasks with improved performance, it is not task-specific. For specific tasks, if the pre-trained BERT is adapted through a fine-tuning process on a task-related corpus, the task performance can be further improved.
Methodology
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction unaddressed. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
This section introduces the methodology of the APC module and the ATE module, respectively, and the contents are organized in the order of the network layer hierarchy.
Methodology ::: Task Definition ::: Aspect Term Extraction
Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and the input is prepared with IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, and the labels indicate the beginning, inside and outside of the aspect terms, respectively. For the ATE task, the input of the example review “The price is reasonable although the service is poor.” will be prepared as $S=\lbrace w_1,w_2 \cdots w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example will be labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $.
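A small sketch of how the IOB labels could be prepared for this example is shown below; the tokenization is whitespace-based for illustration, whereas the model actually uses the BERT tokenizer, and the label strings stand in for $B_{asp}$, $I_{asp}$ and $O$.

```python
def iob_labels(tokens, aspect_spans):
    # aspect_spans: list of (start, end) token indices (inclusive) of the aspect terms.
    labels = ["O"] * len(tokens)
    for start, end in aspect_spans:
        labels[start] = "B-ASP"
        for i in range(start + 1, end + 1):
            labels[i] = "I-ASP"
    return labels

tokens = "The price is reasonable although the service is poor .".split()
print(iob_labels(tokens, [(1, 1), (6, 6)]))
# ['O', 'B-ASP', 'O', 'O', 'O', 'O', 'B-ASP', 'O', 'O', 'O']
```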
Methodology ::: Task Definition ::: Aspect Polarity Classification
Aspect polarity classification is a multi-grained sub-task of sentiment analysis, aiming at predicting the aspect polarity for targeted aspects. Suppose that “The price is reasonable although the service is poor . ” is the input for APC task, consistently with ATE task, $S=\lbrace w_1,w_2 \cdots w_n\rbrace $ stands for all the token of the review, and $S^t=\lbrace w_i,w_{i+1} \cdots w_{j}\rbrace (1<=i<j<=n)$ is the aspect sequence within $S$, $i$ and $j$ are the beginning and end positions in $S$ respectively.
Methodology ::: Model Architecture
Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context respectively. To conduct multi-task training at the same time, the input sequences are tokenized into different tokens and each token is assigned two kinds of labels. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belonging to the aspect.
Fig. FIGREF18 shows the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context feature. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects is based on the features of the global context.
Methodology ::: Model Architecture ::: BERT-Shared Layer
The pre-trained BERT model is designed to improve performance for most NLP tasks, and the LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed at extracting local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedding layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of the LCFG and the GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
Methodology ::: Multi-Head Self-Attention
Multi-head self-attention is based on multiple scaled dot-product attention (SDA) units, which can be utilized to extract deep semantic features in the context, and the features are represented in self-attention scores. The MHSA can avoid the negative influence caused by the long-distance dependence of the context when learning the features. Suppose $X_{SDA}$ represents the input features learned by the LCFG. The scaled dot-product attention is calculated as follows:
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled dot-product attentions in parallel and concatenates the output features, then transforms the features by multiplying by the matrix $W^{M H}$. $h$ represents the number of attention heads and is equal to 12.
The “;” means feature concatenation of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for projection. Additionally, we apply a $\tanh $ activation function to the MHSA learning process, which significantly enhances the feature-capturing capability.
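A compact numpy sketch of the SDA and multi-head concatenation described above is given below. The head count follows the text (12 heads); the softmax-with-scaling form of the attention score is the standard transformer formulation and is assumed here, since the equation itself is not reproduced above.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    # Standard form: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def multi_head_self_attention(X, Wq, Wk, Wv, Wmh, h=12):
    # X: (n, d_h). Wq/Wk/Wv: lists of h per-head projection matrices. Wmh: (h*d_v, d_h).
    heads = [scaled_dot_attention(X @ Wq[i], X @ Wk[i], X @ Wv[i]) for i in range(h)]
    return np.tanh(np.concatenate(heads, axis=-1) @ Wmh)   # tanh activation as in the text
```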
Methodology ::: Local Context Focus ::: Semantic-Relative Distance
The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information.
SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token and a targeted aspect as the SRD of that token-aspect pair. The SRD is calculated as:
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $i$-th token and the targeted aspect.
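The per-token SRD can be computed from the aspect's central position as sketched below. The exact centering term is not written out above, so this sketch assumes SRD is the distance to the aspect center minus half the aspect length (clipped at zero), which matches the verbal description; treat it as an assumption rather than the paper's exact formula.

```python
def semantic_relative_distance(n_tokens, aspect_start, aspect_len):
    # Assumption: SRD_i = max(|i - P_a| - floor(m / 2), 0), where P_a is the
    # central position of the aspect and m its length.
    p_a = aspect_start + aspect_len // 2          # central position of the aspect
    return [max(abs(i - p_a) - aspect_len // 2, 0) for i in range(n_tokens)]

# Tokens inside the aspect get SRD 0; neighbouring tokens get small values.
print(semantic_relative_distance(n_tokens=10, aspect_start=6, aspect_len=1))
```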
Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact.
Methodology ::: Local Context Focus ::: Context-features Dynamic Mask
Apart from the features of the local context, the CDM layer will mask the non-local context features learned by the $BERT^l$ layer. Although it is easy to directly mask the non-local context words in the input sequence, it is inevitable to discard the features of non-local context words. As the CDM layer is deployed, only a relatively small amount of the semantic context itself will be masked at the corresponding output positions. The relative representation of context words and aspects, which carries relatively little semantics, is preserved in the corresponding output positions.
According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows,
To mask the features of the non-local context, we define a feature masking matrix $M$, and $ V_{i}^{m} $ is the mask vector for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with regard to the targeted aspect is less than the threshold $\alpha $ are the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
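A sketch of the CDM masking step: positions whose SRD reaches or exceeds the threshold are set to zero vectors, while local-context positions keep the BERT features intact. The element-wise formulation here is an illustration consistent with the description; the rebalancing MHSA encoder is omitted.

```python
import numpy as np

def cdm_mask(bert_features, srd, alpha):
    # bert_features: (n, d_h) preliminary output of the local-context BERT layer.
    # srd: per-token semantic-relative distances; alpha: SRD threshold.
    n, d_h = bert_features.shape
    mask = np.zeros((n, d_h))
    for i in range(n):
        if srd[i] < alpha:          # local-context token: mask vector is the ones vector E
            mask[i] = np.ones(d_h)  # non-local tokens keep the zeros vector O
    return bert_features * mask     # element-wise product with the mask vectors
```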
Methodology ::: Local Context Focus ::: Context-features Dynamic Weighting
Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to further explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more moderate strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed according to their SRD with respect to a targeted aspect.
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistent with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. Both the output features of the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried to concatenate the learned features of the CDM and CDW layers and apply a linear transformation to obtain the features of the local context.
$W^{f}$ and $b^{f}$ are the weight matrix and bias vector of the linear transformation, and $O^{f}$ is the fused local context feature. The model can choose one of the three approaches to learn the local context features.
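A corresponding sketch of the CDW layer follows. The precise decay function is not reproduced above, so the weight used here (decaying linearly with how far the SRD exceeds the threshold) is an assumption consistent with the description, not the paper's exact formula.

```python
import numpy as np

def cdw_weight(bert_features, srd, alpha):
    # Keep local-context features intact and decay non-local ones by their SRD.
    # Assumed weight for non-local tokens: (n - (SRD_i - alpha)) / n.
    n, d_h = bert_features.shape
    weighted = bert_features.copy()
    for i in range(n):
        if srd[i] >= alpha:                              # non-local context token
            w = (n - (srd[i] - alpha)) / n               # linear decay in SRD
            weighted[i] = bert_features[i] * w
    return weighted
```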
Methodology ::: Feature Interactive Learning
LCF-ATEPC does not rely only on local context features for sentiment polarity classification, but combines and learns the local context features and the global context features to conduct polarity classification.
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$.
Methodology ::: Aspect Polarity Classifier
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden states at the position corresponding to the first token in the input sequence. Then a softmax operation is applied to predict the sentiment polarity.
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
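Putting the previous two subsections together, the polarity head can be sketched as: concatenate local and global features, project them, encode with MHSA, head-pool the first position, and apply softmax. The dimensions and parameter names below are placeholders, and the MHSA encoder is passed in as a callable.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apc_head(O_local, O_global, W_lg, b_lg, mhsa, W_cls, b_cls):
    # O_local, O_global: (n, d_h) local/global context features.
    # W_lg: (2*d_h, d_h) fusion projection; mhsa: callable MHSA encoder; W_cls: (d_h, C).
    O_lg = np.concatenate([O_local, O_global], axis=-1) @ W_lg + b_lg   # feature fusion
    O_fil = mhsa(O_lg)                                                  # interactive learning
    pooled = O_fil[0]                                                   # head-pooling: first token position
    return softmax(pooled @ W_cls + b_cls)                              # polarity distribution
```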
Methodology ::: Aspect Term Extractor
The aspect term extractor first performs token-level classification for each token. Suppose $T_{i}$ is the feature at the output position corresponding to the $i$-th token; then
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
Methodology ::: Training Details
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively, and BERT-SPC BIBREF9 significantly improves the performance of the APC task. In LCF-ATEPC, BERT-SPC only refactors the form of the input sequence compared with the BERT-BASE model. The input sequence of BERT-BASE is formed as “[CLS]” + sequence + “[SEP]”, while it is formed as “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC.
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of the data input and adopted dual labels of sentiment polarity and token category. Figure FIGREF55 shows the input samples of the BERT-BASE and BERT-SPC models, respectively.
The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. The loss function for the APC task is
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is
where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
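The combined objective itself is not reproduced above; as a rough sketch under the stated definitions, the joint loss can be written as the sum of the two cross-entropy terms plus an explicit L2 penalty over the parameters (the penalty here stands in for the regularization term and is an implementation choice, not the paper's exact formulation). Tensor shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def joint_loss(polarity_logits, polarity_gold, token_logits, token_gold, model, l2_lambda=1e-5):
    # polarity_logits: (B, C); token_logits: (B, k, N); gold tensors hold class indices.
    loss_apc = F.cross_entropy(polarity_logits, polarity_gold)
    loss_ate = F.cross_entropy(token_logits.view(-1, token_logits.size(-1)),
                               token_gold.view(-1))
    l2 = sum((p ** 2).sum() for p in model.parameters())   # explicit L2 regularization
    return loss_apc + loss_ate + l2_lambda * l2
```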
Experiments ::: Datasets and Hyperparameters Setting
To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability in processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese review datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the original datasets and annotated each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been purged, with each aspect carrying a binary polarity of positive or negative. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments.
The table demonstrates the details of these datasets.
The samples distribution of those datasets is not balanced. For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority.
Apart from some hyperparameter settings taken from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions for experiments with different SRDs.
Experiments ::: Compared Methods
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN is a novel neural network model for the APC task aimed at solving the shortcomings of traditional RNNs and CNNs. The GANN applied the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the the BERT-SPC model and local context focus mechanism.
LCF-ATE are the variations of the LCF-ATEPC model which only optimize for the ATE task.
LCF-APC are the variations of LCF-ATEPC which only optimize for the APC task during the training process.
Experiments ::: Results Analysis
The experiments are conducted in several parts. First, the baseline performance of LCF-ATEPC on all Chinese and English datasets was tested; then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of the domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
Experiments ::: Results Analysis ::: Performance on Chinese Review Datasets
Table TABREF70 are the experimental results of LCF-ATEPC models on four Chinese review datasets.
Experiments ::: Results Analysis ::: Performance on SemEval-2014 task4
Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on the multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
Experiments ::: Overall Performance Analysis
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance.
The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar usage and language abbreviations, and the local context focus techniques help to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally have a unified “global” view in a specific review. That is, if the customer is not satisfied with one aspect, they are likely to criticize the others. Similarly, if a customer prefers a restaurant, they tend to be tolerant of some small disamenity, so the CDW mechanism performs better because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks are different, so the model does not achieve the optimal effect on both at the same time.
We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other proposed BERT-based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implemented the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of the ATE subtask on three datasets up to 99%.
ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models.
Experiments ::: Overall Performance Analysis ::: Effectiveness of Multi-task Learning
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize the parameters for only a single task in the multi-task model, to explore the difference between the optimal performance of single-task and multi-task learning.
Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the single APC or ATE task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even to single-task models aimed at APC or ATE. When optimizing the model parameters through back-propagation for multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks, so sometimes multi-task learning cannot achieve the best effect that single-task learning does, which is the compromise a multi-task learning model makes when dealing with multiple tasks.
Experiments ::: Overall Performance Analysis ::: Domain-adaption for LCF-ATEPC
The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaptation. Domain adaptation is an effective technique when integrating the pre-trained BERT-BASE model. By further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-adapted pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance on the APC task is significantly improved by the domain-adapted BERT model. The accuracy benchmark on the classical Restaurant dataset exceeds 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
Experiments ::: Overall Performance Analysis ::: SRD Sensitivity on Different Datasets
We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. Besides, for the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figures FIGREF81 and FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model can achieve the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE task performance reaches the highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is approximately obtained when the SRD threshold is 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
Conclusion
The ATE and APC subtasks were treated as independent tasks in previous studies. Moreover, the multi-task learning model for the ATE and APC subtasks has not attracted enough attention from researchers. Besides, research concerning the Chinese language-oriented ABSA task is not sufficient and urgently needs to be developed. To address the above problems, this paper proposes a multi-task learning model, LCF-ATEPC, for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. Not only for the Chinese language, the models proposed in this paper are multilingual and applicable to the classic English review sentiment analysis task, such as SemEval-2014 task4. The proposed model can automatically extract aspects from reviews and infer aspects' polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks.
Acknowledgments and Funding
Thanks to the anonymous reviewers and the scholars who helped us. This research is supported by the Innovation Project of Graduate School of South China Normal University and funded by National Natural Science Foundation of China, Multi-modal Brain-Computer Interface and Its Application in Patients with Consciousness Disorder, Project approval number: 61876067. | significantly improves the accuracy and F1 score of aspect polarity classification |
6cd874c4ae8e70f3c98c7176191c13a7decfbc45 | 6cd874c4ae8e70f3c98c7176191c13a7decfbc45_0 | Q: What was state of the art on SemEval-2014 task4 Restaurant and Laptop dataset?
Text: Introduction
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore research on aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking an aspect extraction method for the target task because there is not enough research support.
The APC task is a kind of classification problem. Research on the APC task is more abundant than on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to coarsely analyze the overall sentiment polarity at the sentence or document level. In the APC task, polarities are most commonly classified into three categories: positive, negative, and neutral. Sentiment polarity classified at the aspect level can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task that aims to extract aspects from reviews or tweets. In most studies BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently of the APC task. The ATE task first segments a review into separate tokens and then infers whether each token belongs to any aspect. The tokens may be labeled in different forms in different studies, but most studies have adopted IOB labels to annotate tokens.
Aiming to automatically and efficiently extract aspects from text and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance on the commonly used SemEval-2014 task4 datasets, the experimental results on four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual tasks. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspects and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover unknown aspects and avoid the tedious and huge cost of manually annotating all aspects and polarities. This is of great significance for field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies a multi-task model for the APC and ATE subtasks on multilingual reviews, which provides a new idea for research on Chinese aspect extraction.
This paper is the first to apply self-attention and local context focus techniques to the aspect term extraction task, fully exploring their potential in this task.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves the performance of both the ATE and APC subtasks, and achieves new state-of-the-art performance, especially on the F1 score of the ATE task. Besides, we adopted a domain-adapted BERT model trained on a domain-related corpus in the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence, applicable to the SemEval-2014 and Chinese review datasets of the ABSA joint task: the aspect term label and the sentiment polarity label, respectively. The dual labels improve the learning efficiency of the proposed model.
Related Works
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.
Related Works ::: Aspect Term Extraction
The approaches to ATE tasks fall into two categories: early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews, using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.
Compared with manually annotating all aspects in the dataset, models for ATE can learn the features of aspects and automatically extract aspects from the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. Through classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embeddings to explore the co-occurrence distribution of words and applies an attention mechanism to weaken irrelevant words, further improving the coherence of all aspects. BIBREF22 proposed a deep neural network-based model, namely coupled multilevel attention, which does not require any parser or other linguistic resources for pre-processing and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows aspect terms and opinion terms to be learned interactively and to dual-propagate during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the overall effect of aspect extraction.
Up to now, MHSA and pre-trained models have not been applied to the ATE task. This paper explores the potential of these new deep learning techniques and network architectures in the ATE task.
Related Works ::: Aspect Polarity Classification
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has been comprehensively taken over by deep neural networks; therefore, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for the APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into left and right parts and models them independently. The attention mechanism BIBREF28 has been adapted to the APC task in the last few years. ATAE-LSTM takes the feature representations of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weights according to the relationship between aspects and context words, finally predicting the polarity of aspects according to the weighted context features. Another LSTM-based model, IAN BIBREF7, deployed with an attention mechanism, equips two independent LSTM networks to capture the features of the context and the aspect, interactively integrating and learning the inner correlation between the features of the context and the targeted aspects. RAM BIBREF13 is a bi-directional LSTM-based architecture that deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned via the attention mechanism and GRUs to obtain the global semantic features of the text and predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a transformation architecture based on context-preserving transformation (CPT) units. TNet integrates a bidirectional LSTM network and a convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. The multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model equipped with a variety of fine-grained attention mechanisms, which are applied to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed methods for the Chinese-language APC task, which conduct the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed at the aspect sentiment classification subtask and the aspect-opinion pair identification subtask is proposed by BIBREF30, in which external knowledge is considered and fed into the network to alleviate the problem of insufficient training data. The gated alternate neural network (GANN) BIBREF31, proposed for the APC task, aims to solve the shortcomings of traditional RNNs and CNNs. GANN applies the gate truncation RNN (GTR) to learn aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review dataset show that the proposed model works well while conducting the ATE and APC subtasks simultaneously.
BERT-SPC is the BERT text-pair classification model; it is a variant of BERT adapted to solve the ABSA task in BIBREF9 and achieves high performance. LCF-BERT BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model is based on a large universal corpus and is easy to apply to most tasks to improve performance, it is not task-specific. For a specific task, if the pre-trained BERT is adapted to it through fine-tuning on a task-related corpus, the task performance can be further improved.
Methodology
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving aside the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
This section introduces the methodology of the APC module and the ATE module, respectively, and the contents are organized in order of the network layer hierarchy.
Methodology ::: Task Definition ::: Aspect Term Extraction
Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and the input is prepared based on IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, which indicate the beginning, inside and outside of the aspect terms, respectively. For the ATE task, the input of the example review “The price is reasonable although the service is poor.” will be prepared as $S=\lbrace w_1,w_2 \cdots w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example will be labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $, as illustrated by the sketch below.
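To make the IOB scheme concrete, here is a minimal sketch that reproduces the label sequence $Y$ for the example review. The word-level token list and the `iob_labels` helper are illustrative assumptions; the actual model labels the WordPiece tokens produced by the BERT tokenizer.

```python
# Illustrative sketch of ATE labeling with B_asp / I_asp / O tags.
# The token list and helper below are assumptions for readability; the model
# itself operates on BERT (WordPiece) tokens.

def iob_labels(tokens, aspects):
    """Assign B_asp / I_asp / O labels to each token of a review."""
    labels = ["O"] * len(tokens)
    for aspect in aspects:                          # each aspect is a token list
        for start in range(len(tokens) - len(aspect) + 1):
            if tokens[start:start + len(aspect)] == aspect:
                labels[start] = "B_asp"
                labels[start + 1:start + len(aspect)] = ["I_asp"] * (len(aspect) - 1)
    return labels

tokens = ["The", "price", "is", "reasonable", "although",
          "the", "service", "is", "poor", "."]      # n = 10
print(iob_labels(tokens, [["price"], ["service"]]))
# ['O', 'B_asp', 'O', 'O', 'O', 'O', 'B_asp', 'O', 'O', 'O']
```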
Methodology ::: Task Definition ::: Aspect Polarity Classification
Aspect polarity classification is a multi-grained sub-task of sentiment analysis, aiming at predicting the polarity of targeted aspects. Suppose that “The price is reasonable although the service is poor .” is the input for the APC task; consistently with the ATE task, $S=\lbrace w_1,w_2 \cdots w_n\rbrace $ stands for all the tokens of the review, and $S^t=\lbrace w_i,w_{i+1} \cdots w_{j}\rbrace (1 \le i < j \le n)$ is the aspect sequence within $S$, where $i$ and $j$ are its beginning and end positions in $S$, respectively.
Methodology ::: Model Architecture
Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context, respectively. To conduct multi-task training at the same time, the input sequences are tokenized and each token is assigned two kinds of labels. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens that belong to an aspect.
Fig FIGREF18 shows the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context features. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects is based on the features of the global context.
Methodology ::: Model Architecture ::: BERT-Shared Layer
The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
Methodology ::: Multi-Head Self-Attention
Multi-head self-attention is based on scaled dot-product attention (SDA), which can be utilized to extract deep semantic features in the context; the features are represented as self-attention scores. MHSA avoids the negative influence caused by long-distance dependencies in the context when learning features. Suppose $X_{SDA}$ are the input features learned by the LCFG. The scaled dot-product attention is calculated as follows:
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled dot-product attentions in parallel, concatenates the output features, and then transforms the features by multiplying them with a matrix $W^{MH}$. $h$ represents the number of attention heads and is equal to 12.
The “;” denotes feature concatenation across heads. $W^{MH} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for projection. Additionally, we apply a $\tanh$ activation function to the MHSA output, which significantly enhances its feature-capture capability.
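As a minimal reference for the SDA and MHSA equations above, the sketch below implements scaled dot-product attention and the 12-head concatenation followed by the $\tanh$ activation in NumPy. The random projection matrices and the per-head dimension $d_h/h$ are illustrative assumptions, not the trained parameters of the model.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """SDA(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def mhsa(X, W_q, W_k, W_v, W_mh, h=12):
    """Run h attention heads in parallel, concatenate, project, then apply tanh."""
    heads = [scaled_dot_attention(X @ W_q[i], X @ W_k[i], X @ W_v[i]) for i in range(h)]
    return np.tanh(np.concatenate(heads, axis=-1) @ W_mh)

# Toy shapes: n tokens, hidden size d_h, per-head size d_h // h (an assumption).
n, d_h, h = 10, 768, 12
d = d_h // h
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d_h))
W_q, W_k, W_v = (rng.normal(size=(h, d_h, d)) for _ in range(3))
W_mh = rng.normal(size=(h * d, d_h))
print(mhsa(X, W_q, W_k, W_v, W_mh).shape)   # (10, 768)
```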
Methodology ::: Local Context Focus ::: Semantic-Relative Distance
The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information.
SRD is a concept defined on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token and a targeted aspect as the SRD of that token-aspect pair. The SRD is calculated as:
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $i$-th token and the targeted aspect.
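The display equation for SRD did not survive extraction in this copy, so the sketch below reconstructs it as $SRD_i = |i - P_a| - \lfloor m/2 \rfloor$, the formulation used in the LCF line of work this model builds on; treat the exact formula as an assumption that is merely consistent with the variable definitions above.

```python
# Hedged reconstruction of the SRD computation; the exact display equation is
# missing here, so SRD_i = |i - P_a| - floor(m / 2) is assumed from prior LCF work.

def srd(i, aspect_positions):
    """Semantic-relative distance between the i-th token and a targeted aspect."""
    m = len(aspect_positions)                      # length of the targeted aspect
    p_a = sum(aspect_positions) / m                # central position of the aspect
    return abs(i - p_a) - m // 2

# Distances of every token to the aspect "service" (position 6) in the example.
print([srd(i, [6]) for i in range(10)])
# [6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 1.0, 2.0, 3.0]
```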
Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact.
Methodology ::: Local Context Focus ::: Context-features Dynamic Mask
Apart from the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it would be easy to directly mask the non-local context words in the input sequence, doing so inevitably discards their features entirely. With the CDM layer deployed, only a relatively small amount of the semantic context itself is masked at the corresponding output positions, and the relative representation of context words and aspects with relatively little semantics is preserved at those output positions.
According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows,
To mask the features of the non-local context, we define a feature masking matrix $M$, where $V_{i}^{m}$ is the mask vector for each token in the input sequence. $\alpha$ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with regard to the targeted aspect is less than the threshold $\alpha$ are the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
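A minimal sketch of the CDM mask, assuming the SRD reconstruction above: positions whose SRD is within the threshold $\alpha$ keep a ones vector $E$, all other positions get the zeros vector $O$, and the mask is applied element-wise before the rebalancing MHSA encoder (omitted here). All concrete values are illustrative assumptions.

```python
import numpy as np

def srd(i, aspect_positions):                      # same hedged SRD formula as above
    m = len(aspect_positions)
    return abs(i - sum(aspect_positions) / m) - m // 2

def cdm_mask(n_tokens, d_h, aspect_positions, alpha):
    """Build M = [V_1^m, ..., V_n^m]: ones vectors for local context, zeros otherwise."""
    M = np.zeros((n_tokens, d_h))
    for i in range(n_tokens):
        if srd(i, aspect_positions) < alpha:       # local context token
            M[i] = 1.0
    return M

# Apply the mask to toy BERT^l output features for the 10-token example.
O_bert_l = np.ones((10, 768))
O_cdm = O_bert_l * cdm_mask(10, 768, aspect_positions=[6], alpha=5)
print(O_cdm.sum(axis=1))   # non-local positions collapse to zero vectors
```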
Methodology ::: Local Context Focus ::: Context-features Dynamic Weighting
Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism; it takes a more modest strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed according to their SRD with respect to a targeted aspect.
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistent with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha$ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{CDW}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. The output features of both the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and taking a linear transformation as the features of the local context.
$W^{f}$, $O^{f}$ and $b^{f}$ are the weight matrix, the fused local context features and the bias vector, respectively. The model can choose one of the three approaches to learn the local context features.
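Likewise, the CDW weighting equation is only described in prose here; the sketch below decays non-local positions by $(n - (SRD_i - \alpha)) / n$, the weighting used in the LCF work this model extends, and the optional concatenation-plus-linear fusion of CDM and CDW outputs is shown with randomly initialized $W^f$ and $b^f$. All of these concrete values are assumptions.

```python
import numpy as np

def srd(i, aspect_positions):                      # same hedged SRD formula as above
    m = len(aspect_positions)
    return abs(i - sum(aspect_positions) / m) - m // 2

def cdw_weights(n_tokens, d_h, aspect_positions, alpha):
    """Build W = [V_1^w, ..., V_n^w]: 1 for local context, decayed weights otherwise."""
    W = np.ones((n_tokens, d_h))
    for i in range(n_tokens):
        d = srd(i, aspect_positions)
        if d >= alpha:                             # non-local context token
            W[i] = (n_tokens - (d - alpha)) / n_tokens
    return W

n, d_h = 10, 768
O_cdm = np.ones((n, d_h))                          # stand-in for the CDM output
O_cdw = np.ones((n, d_h)) * cdw_weights(n, d_h, aspect_positions=[6], alpha=3)
W_f = np.random.default_rng(0).normal(size=(2 * d_h, d_h))
b_f = np.zeros(d_h)
O_fusion = np.concatenate([O_cdm, O_cdw], axis=-1) @ W_f + b_f   # optional fusion
print(O_cdw[:, 0], O_fusion.shape)                 # weights taper with distance
```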
Methodology ::: Feature Interactive Learning
LCF-ATEPC does not rely only on local context features for sentiment polarity classification; it combines and jointly learns the local context features and the global context features to conduct polarity classification.
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$.
Methodology ::: Aspect Polarity Classifier
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden state at the position corresponding to the first token of the input sequence; then a softmax operation is applied to predict the sentiment polarity.
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
Methodology ::: Aspect Term Extractor
The aspect term extractor first performs token-level classification for each token. Suppose $T_{i}$ denotes the features at the output position corresponding to the $i$-th token; then,
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
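To tie the two output heads together, the sketch below shows head-pooling followed by a softmax over the $C$ sentiment categories for APC, and a per-token linear layer with a softmax over the $N$ token categories for ATE. The randomly initialized linear layers stand in for the trained classifier parameters and are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def apc_head(O_fil, W_apc, b_apc):
    """Head-pooling: hidden state of the first token, then softmax over C polarities."""
    return softmax(O_fil[0] @ W_apc + b_apc)

def ate_head(O_global, W_ate, b_ate):
    """Token-level classification: softmax over N token categories for every token."""
    return softmax(O_global @ W_ate + b_ate, axis=-1)

rng = np.random.default_rng(0)
O_fil = rng.normal(size=(10, 768))                 # interactively learned features
O_global = rng.normal(size=(10, 768))              # global context features
C, N = 3, 3                                        # {pos, neg, neu}, {B_asp, I_asp, O}
Y_polarity = apc_head(O_fil, rng.normal(size=(768, C)), np.zeros(C))
Y_term = ate_head(O_global, rng.normal(size=(768, N)), np.zeros(N))
print(Y_polarity.shape, Y_term.shape)              # (3,) (10, 3)
```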
Methodology ::: Training Details
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. And the BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactored the input sequence form compared with BERT-BASE model. The input sequence of BERT-BASE is formed in “[CLS]” + sequence + “[SEP]”, while it is formed in “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC.
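The difference between the two input forms is easiest to see on the running example. The sketch below only assembles the two raw sequences as strings; actual WordPiece tokenization is left to the BERT tokenizer, and the choice of "service" as the appended aspect is an assumption for illustration.

```python
def bert_base_input(sequence):
    """BERT-BASE style input: [CLS] + sequence + [SEP]."""
    return "[CLS] " + sequence + " [SEP]"

def bert_spc_input(sequence, aspect):
    """BERT-SPC style input: [CLS] + sequence + [SEP] + aspect + [SEP]."""
    return "[CLS] " + sequence + " [SEP] " + aspect + " [SEP]"

review = "The price is reasonable although the service is poor ."
print(bert_base_input(review))
print(bert_spc_input(review, "service"))
# [CLS] The price is reasonable although the service is poor . [SEP]
# [CLS] The price is reasonable although the service is poor . [SEP] service [SEP]
```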
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of data input and adopted dual labels of sentiment polarity and token category. The Figure FIGREF55 are the input samples of BERT-BASE and BERT-SPC model, respectively.
The cross-entropy loss is adopted for APC and ATE subtask and the $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC, here is the loss function for APC task,
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is
where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
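Since the display equations for the two cross-entropy losses and their sum are not legible in this copy, here is a minimal PyTorch-style sketch of how the joint objective can be assembled from the quantities named above. Equal weighting of the two task losses and the $\lambda$ value are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(apc_logits, apc_gold, ate_logits, ate_gold, params, lam=1e-5):
    """L = L_apc + L_ate + lambda * ||Theta||_2^2 (equal task weights assumed)."""
    loss_apc = F.cross_entropy(apc_logits, apc_gold)            # over C polarity classes
    loss_ate = F.cross_entropy(ate_logits.reshape(-1, ate_logits.size(-1)),
                               ate_gold.reshape(-1))            # over N classes for k tokens
    l2 = sum((p ** 2).sum() for p in params)                    # L2 regularization term
    return loss_apc + loss_ate + lam * l2

# Toy batch: 4 reviews, 10 tokens each, 3 polarity classes, 3 token classes.
apc_logits, apc_gold = torch.randn(4, 3), torch.randint(0, 3, (4,))
ate_logits, ate_gold = torch.randn(4, 10, 3), torch.randint(0, 3, (4, 10))
params = [torch.randn(768, 3, requires_grad=True)]
print(joint_loss(apc_logits, apc_gold, ate_logits, ate_gold, params))
```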
Experiments ::: Datasets and Hyperparameters Setting
To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability of processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese review datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets: we reformatted the original datasets and annotated each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been purged, with each aspect carrying a binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the seven datasets and adopt it to conduct multilingual-oriented ATE and APC experiments.
The table demonstrates the details of these datasets.
The samples distribution of those datasets is not balanced. For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority.
Apart from some hyperparameters setting referred to previous researches, we also conducted the controlled trials and analyzed the experimental results to optimize the hyperparameters setting. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions for experiments with different SRD.
Experiments ::: Compared Methods
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN is a novel neural network model for the APC task, aimed at solving the shortcomings of traditional RNNs and CNNs. The GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pre-trained BERT model. We adapt it to ABSA multi-task learning, giving it the same ability to automatically extract aspect terms and classify aspect polarity as the LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the the BERT-SPC model and local context focus mechanism.
LCF-ATE are the variations of the LCF-ATEPC model which only optimize for the ATE task.
LCF-APC are the variations of LCF-ATEPC and it only optimize for the APC task during training process.
Experiments ::: Results Analysis
The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
Experiments ::: Results Analysis ::: Performance on Chinese Review Datasets
Table TABREF70 are the experimental results of LCF-ATEPC models on four Chinese review datasets.
Experiments ::: Results Analysis ::: Performance on SemEval-2014 task4
Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously process and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on this multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
Experiments ::: Overall Performance Analysis
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve overall performance.
The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar and abbreviations, and the local context focus technique helps to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally hold a unified “global” view in a specific review: if a customer is not satisfied with one aspect, they are likely to criticize the others as well. Likewise, if a customer favors a restaurant, they will be tolerant of some small disamenities, so the CDW mechanism performs better because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks differ, so the model does not achieve the optimal effect on both at the same time.
We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other improved BERT-based models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implemented the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of the ATE subtask on three datasets up to 99%.
ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models.
Experiments ::: Overall Performance Analysis ::: Effectiveness of Multi-task Learning
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize parameters for only a single task in the multi-task model to explore the difference between the optimal performance of a single task and that of the multi-task learning model.
Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the APC or ATE single task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even to single-task models aimed at APC or ATE. When optimizing the model parameters through back-propagation over multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks, so sometimes multi-task learning cannot achieve as good an effect as single-task learning does; this is the compromise the multi-task learning model makes when dealing with multiple tasks.
Experiments ::: Overall Performance Analysis ::: Domain-adaption for LCF-ATEPC
The BERT-BASE model is trained on a large-scale general corpus, so fine-tuning during the training process is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique for integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-related pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model, with the accuracy benchmark on the classical Restaurant dataset reaching more than 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
Experiments ::: Overall Performance Analysis ::: SRD Sensitivity on Different Datasets
We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. Besides, for the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figures FIGREF81 and FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model achieves the best APC accuracy and F1 score when the SRD threshold is about 4-5, while its ATE performance peaks when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while its best ATE F1 score is obtained when the SRD threshold is approximately 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
Conclusion
The ATE and APC subtasks were treated as independent tasks in previous studies, and multi-task learning for the two subtasks has not attracted enough attention from researchers. Besides, research on the Chinese language-oriented ABSA task remains insufficient and needs further development. To address these problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. The proposed models are not limited to Chinese: they are multilingual and applicable to classic English review sentiment analysis tasks such as SemEval-2014 task4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks.
Acknowledgments and Funding
Thanks to the anonymous reviewers and the scholars who helped us. This research is supported by the Innovation Project of Graduate School of South China Normal University and funded by National Natural Science Foundation of China, Multi-modal Brain-Computer Interface and Its Application in Patients with Consciousness Disorder, Project approval number: 61876067. | BERT-ADA, BERT-PT, AEN-BERT, SDGCN-BERT |
b807dd3d42251615b881632caa5e331e2203d269 | b807dd3d42251615b881632caa5e331e2203d269_0 | Q: What was previous state-of-the-art on four Chinese reviews datasets?
Text: Introduction
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatic extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," the full-designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in the supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking aspect extraction method on targeted tasks because there is not enough research support.
The APC task is a kind of classification problem. Research on the APC task is more abundant than on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to coarsely analyze the overall sentiment polarity at the sentence or document level. In the APC task, polarities are most commonly classified into three categories: positive, negative, and neutral. Sentiment polarity classified at the aspect level can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from the reviews or tweet. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB label to annotate tokens.
Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies a multi-task model for the APC and ATE subtasks on multilingual reviews, which provides a new idea for research on Chinese aspect extraction.
This paper is the first to apply self-attention and local context focus techniques to the aspect term extraction task, fully exploring their potential in this task.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves the performance of both the ATE and APC subtasks, and achieves new state-of-the-art performance, especially on the F1 score of the ATE task. Besides, we adopted a domain-adapted BERT model trained on a domain-related corpus in the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence, applicable to the SemEval-2014 and Chinese review datasets of the ABSA joint task: the aspect term label and the sentiment polarity label, respectively. The dual labels improve the learning efficiency of the proposed model.
Related Works
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.
Related Works ::: Aspect Term Extraction
The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine-learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can selects rules automatically.
Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the overall effect of aspect extraction.
Up to now, MHSA and pre-trained models have not been applied to the ATE task. This paper explores the potential of these new deep learning techniques and network architectures in the ATE task.
Related Works ::: Aspect Polarity Classification
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has been comprehensively taken over by deep neural networks; therefore, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed the methods for the Chinese language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed to aspect sentiment classification subtask and aspect-opinion pair identification subtask is proposedby BIBREF30, in which the external knowledge are considered and put into the network to alleviate the problem of insufficient train data. The gated alternate neural network (GANN) BIBREF31 proposed for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn the aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review show that the proposed model works fine while conducting ATE and APC subtask simultaneously.
BERT-SPC is the BERT text-pair classification model; it is a variant of BERT adapted to solve the ABSA task in BIBREF9 and achieves high performance. LCF-BERT BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model is based on a large universal corpus and is easy to apply to most tasks to improve performance, it is not task-specific. For a specific task, if the pre-trained BERT is adapted to it through fine-tuning on a task-related corpus, the task performance can be further improved.
Methodology
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
This section introduces the methodology of the APC module and the ATE module, respectively. and the contents are organized by order of the network layer hierarchy.
Methodology ::: Task Definition ::: Aspect Term Extraction
Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and the input is prepared based on IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, which indicate the beginning, inside and outside of the aspect terms, respectively. For the ATE task, the input of the example review “The price is reasonable although the service is poor.” will be prepared as $S=\lbrace w_1,w_2 \cdots w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example will be labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $.
Methodology ::: Task Definition ::: Aspect Polarity Classification
Aspect polarity classification is a multi-grained sub-task of sentiment analysis, aiming at predicting the aspect polarity for targeted aspects. Suppose that “The price is reasonable although the service is poor . ” is the input for APC task, consistently with ATE task, $S=\lbrace w_1,w_2 \cdots w_n\rbrace $ stands for all the token of the review, and $S^t=\lbrace w_i,w_{i+1} \cdots w_{j}\rbrace (1<=i<j<=n)$ is the aspect sequence within $S$, $i$ and $j$ are the beginning and end positions in $S$ respectively.
Methodology ::: Model Architecture
Aiming at the problem of insufficient research on aspect term extraction task, a joint deep learning model is designed in this section. This model combines aspect polarity classification task and aspect term extraction task, and two independent Bert layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized into different tokens and the each token is assigned two kinds of label. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belongs to the aspect.
Fig FIGREF18 is the network architecture of LCF-ATEPC. Local context feature generator (LCFG) unit is on the left and a global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and a MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context feature. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects based on the features of the global context.
Methodology ::: Model Architecture ::: BERT-Shared Layer
The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
Methodology ::: Multi-Head Self-Attention
Multi-head self-attention is based on scaled dot-product attention (SDA), which can be utilized to extract deep semantic features in the context; the features are represented as self-attention scores. MHSA avoids the negative influence caused by long-distance dependencies in the context when learning features. Suppose $X_{SDA}$ are the input features learned by the LCFG. The scaled dot-product attention is calculated as follows:
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled dot-product attentions in parallel, concatenates the output features, and then transforms the features by multiplying them with a matrix $W^{MH}$. $h$ represents the number of attention heads and is equal to 12.
The “;” denotes feature concatenation across heads. $W^{MH} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for projection. Additionally, we apply a $\tanh$ activation function to the MHSA output, which significantly enhances its feature-capture capability.
Methodology ::: Local Context Focus ::: Semantic-Relative Distance
The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information.
SRD is a concept defined on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token and a targeted aspect as the SRD of that token-aspect pair. The SRD is calculated as:
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $i$-th token and the targeted aspect.
Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact.
Methodology ::: Local Context Focus ::: Context-features Dynamic Mask
Apart from the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it would be easy to directly mask the non-local context words in the input sequence, doing so inevitably discards the features of the non-local context words entirely. With the CDM layer deployed, only a relatively small amount of the semantic context itself is masked at the corresponding output positions, and the relative representation between context words and aspects that carry relatively little semantics is preserved at the corresponding output positions.
According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows,
To mask the features of the non-local context, we define a feature masking matrix $M$, where $ V_{i}^{m} $ is the mask vector for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with respect to the targeted aspect is less than the threshold $\alpha $ are the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
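A minimal sketch of the CDM idea, assuming the SRD values have already been computed and reusing an MHSA encoder such as the one sketched earlier; names and shapes are illustrative.

```python
# Sketch of the CDM layer: positions with SRD >= alpha are zeroed (multiplied by the
# zeros vector O), local-context positions are kept (ones vector E), and an MHSA
# encoder rebalances the masked features.
import torch

def cdm(bert_out, srd_per_token, alpha, mhsa):
    # bert_out: [seq_len, d_h] preliminary features from BERT^l
    # srd_per_token: [seq_len] SRD of each token w.r.t. the targeted aspect
    keep = (srd_per_token < alpha).float().unsqueeze(-1)   # V_i^m = E or O
    masked = bert_out * keep                                # apply the mask matrix M
    return mhsa(masked.unsqueeze(0)).squeeze(0)             # O^l
```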
Methodology ::: Local Context Focus ::: Context-features Dynamic Weighting
Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to further explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more moderate strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weights according to their SRD with respect to the targeted aspect.
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistent with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. The output features of both the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and applying a linear transformation to obtain the local context features.
$W^{f}$ and $b^{f}$ are the weight matrix and bias vector of the linear transformation, respectively, and $O^{f}$ denotes the fused local context features. The model can choose one of the three approaches to learn the local context features.
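For comparison with CDM, here is a corresponding sketch of the CDW idea; the linear decay $(n - (SRD_i - \alpha))/n$ is an assumption about the exact weighting formula, and the names are illustrative.

```python
# Sketch of the CDW layer: local-context features kept intact, non-local features
# decayed by a weight that shrinks as the SRD grows.
import torch

def cdw(bert_out, srd_per_token, alpha, n, mhsa):
    w = torch.ones_like(srd_per_token, dtype=torch.float)
    far = srd_per_token >= alpha
    w[far] = 1.0 - (srd_per_token[far].float() - alpha) / n   # V_i^w (assumed decay)
    weighted = bert_out * w.clamp(min=0.0).unsqueeze(-1)      # apply weight matrix W
    return mhsa(weighted.unsqueeze(0)).squeeze(0)             # O^l_CDW
```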
Methodology ::: Feature Interactive Learning
LCF-ATEPC does not rely only on local context features for sentiment polarity classification; it combines the local context features and the global context features and learns from both to conduct polarity classification.
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$.
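A small sketch of the feature interactive learning step, under the assumption of a hidden size of 768; it only shows the concatenation, the $W^{lg}$ projection and the subsequent MHSA encoding.

```python
# Sketch of feature interactive learning: concatenate local and global features,
# project with W^{lg} / b^{lg}, then encode with an MHSA encoder.
import torch
import torch.nn as nn

d_h = 768
w_lg = nn.Linear(2 * d_h, d_h)            # W^{lg} with bias b^{lg}

def feature_interactive_learning(o_local, o_global, mhsa):
    # o_local, o_global: [batch, seq_len, d_h]
    o_dense = w_lg(torch.cat([o_local, o_global], dim=-1))   # O^{lg}_dense
    return mhsa(o_dense)                                      # interactively learned features
```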
Methodology ::: Aspect Polarity Classifier
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden state at the position of the first token in the input sequence; then a softmax operation is applied to predict the sentiment polarity.
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
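The head-pooling and softmax described above can be sketched as follows, assuming three polarity categories and a hidden size of 768.

```python
# Sketch of the aspect polarity classifier: head-pooling takes the hidden state of
# the first token, then a linear layer and softmax give the polarity distribution.
import torch.nn as nn

classifier = nn.Linear(768, 3)            # C = 3 polarities (assumed: pos/neg/neutral)

def predict_polarity(o_fil):              # o_fil: [batch, seq_len, d_h]
    head = o_fil[:, 0, :]                 # head-pooling: first-token hidden state
    return nn.functional.softmax(classifier(head), dim=-1)   # Y_polarity
```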
Methodology ::: Aspect Term Extractor
The aspect term extractor first performs token-level classification for each token. Suppose $T_{i}$ is the feature at the position corresponding to the $i$-th token; then
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
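Analogously, a minimal sketch of the token-level classification performed by the aspect term extractor, assuming the three IOB token categories defined for the ATE task.

```python
# Sketch of the aspect term extractor: a token-level classifier over the global
# context features, giving an IOB category for every token.
import torch.nn as nn

token_classifier = nn.Linear(768, 3)      # N = 3 token categories: B_asp, I_asp, O

def extract_terms(o_global):              # o_global: [batch, seq_len, d_h]
    logits = token_classifier(o_global)
    return logits.argmax(dim=-1)          # Y_term for each token
```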
Methodology ::: Training Details
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively, and BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactors the form of the input sequence compared with the BERT-BASE model. The input sequence of BERT-BASE is formed as “[CLS]” + sequence + “[SEP]”, while for BERT-SPC it is formed as “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]”.
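The two input layouts can be illustrated with plain strings (the real inputs are produced by the BERT tokenizer, so this is only a schematic):

```python
# Schematic of the two input sequence forms described above.
text   = "The price is reasonable although the service is poor ."
aspect = "service"
bert_base_input = f"[CLS] {text} [SEP]"
bert_spc_input  = f"[CLS] {text} [SEP] {aspect} [SEP]"
```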
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of data input and adopted dual labels of sentiment polarity and token category. The Figure FIGREF55 are the input samples of BERT-BASE and BERT-SPC model, respectively.
The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. The loss function for the APC task is
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is
where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
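A sketch of the joint objective; combining the two cross-entropy losses by a simple sum and the regularization coefficient used below are assumptions made for illustration only.

```python
# Sketch of the joint loss: cross-entropy for APC and ATE plus L2 regularization.
import torch
import torch.nn.functional as F

def joint_loss(polarity_logits, polarity_labels, token_logits, token_labels,
               params, l2_lambda=1e-5):
    loss_apc = F.cross_entropy(polarity_logits, polarity_labels)
    loss_ate = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                               token_labels.reshape(-1))
    l2 = sum((p ** 2).sum() for p in params)      # L2 penalty over the parameter set
    return loss_apc + loss_ate + l2_lambda * l2   # assumed simple sum of the task losses
```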
Experiments ::: Datasets and Hyperparameters Setting
To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets: the Laptops and Restaurant datasets of SemEval-2014 Task 4 subtask 2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability of processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese review datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets, reformatting the original data and annotating each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and samples with conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been cleaned, and each aspect carries a binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual data, we built a multilingual dataset by mixing the seven datasets and adopt it to conduct multilingual-oriented ATE and APC experiments.
The table demonstrates the details of these datasets.
The samples distribution of those datasets is not balanced. For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority.
Apart from some hyperparameter settings taken from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional notes for experiments using different SRD values.
Experiments ::: Compared Methods
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN BIBREF31 is a novel neural network model for the APC task aimed at overcoming the shortcomings of traditional RNNs and CNNs. GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations, and it obtained the state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the BERT-SPC model and the local context focus mechanism.
LCF-ATE is a variation of the LCF-ATEPC model that only optimizes for the ATE task.
LCF-APC is a variation of LCF-ATEPC that only optimizes for the APC task during the training process.
Experiments ::: Results Analysis
The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
Experiments ::: Results Analysis ::: Performance on Chinese Review Datasets
Table TABREF70 shows the experimental results of the LCF-ATEPC models on the four Chinese review datasets.
Experiments ::: Results Analysis ::: Performance on SemEval-2014 task4
Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on the multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
Experiments ::: Overall Performance Analysis
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance.
The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar and abbreviations, and the local context focus techniques help infer the polarity of terms. Interestingly, for the Laptop and Restaurant datasets, reviewers often hold a unified “global” view within a specific review: if a customer is not satisfied with one aspect, they are likely to criticize the others as well, and likewise a customer who likes a restaurant tends to be tolerant of small inconveniences. Therefore the CDW mechanism performs better on these datasets, because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks differ, so the model does not reach its optimum on both tasks at the same time.
We build a joint model for the ATE and APC multi-task based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other BERT-based improved models on some datasets, such as BERT-PT, AEN-BERT, and SDGCN-BERT. Meanwhile, we implemented the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC raises the F1 score of the ATE subtask on the three datasets to up to 99%.
ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models.
Experiments ::: Overall Performance Analysis ::: Effectiveness of Multi-task Learning
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize the parameters for only a single task in the multi-task model, to explore the difference between the optimal performance of single-task and multi-task learning.
Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the single APC or ATE task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even to single-task models aimed at APC or ATE. When optimizing the model parameters through back-propagation over multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks, so multi-task learning sometimes cannot achieve as good an effect as single-task learning does; this is the compromise a multi-task learning model makes when dealing with multiple tasks.
Experiments ::: Overall Performance Analysis ::: Domain-adaption for LCF-ATEPC
The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique when integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a corpus similar or homologous to the target ABSA dataset, a domain-adapted pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain domain-adapted pre-trained BERT models based on the Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy on the classical Restaurant dataset exceeds 90%, which means that LCF-ATEPC is the first ABSA-oriented model to reach 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
Experiments ::: Overall Performance Analysis ::: SRD Sensitivity on Different Datasets
We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. For the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model can achieve the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE task performance reaches the highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is approximately obtained when the SRD threshold is 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
Conclusion
The ATE and APC subtasks were treated as independent tasks in previous studies, and the multi-task learning of the ATE and APC subtasks has not attracted enough attention from researchers. Besides, research on the Chinese-language-oriented ABSA task is insufficient and urgently needs to be developed. To address these problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. The proposed models are not limited to Chinese: they are multilingual and applicable to classic English review sentiment analysis tasks such as SemEval-2014 Task 4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks.
Acknowledgments and Funding
Thanks to the anonymous reviewers and the scholars who helped us. This research is supported by the Innovation Project of Graduate School of South China Normal University and funded by National Natural Science Foundation of China, Multi-modal Brain-Computer Interface and Its Application in Patients with Consciousness Disorder, Project approval number: 61876067. | GANN obtained the state-of-the-art APC performance on the Chinese review datasets |
d39c911bf2479fdb7af339b59acb32073242fab3 | d39c911bf2479fdb7af339b59acb32073242fab3_0 | Q: In what four Chinese review datasets does LCF-ATEPC achieves state of the art?
Text: Introduction
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, requiring the model to automatically extract the aspects and predict the polarity of each aspect. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed ABSA model needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in the supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking aspect extraction method on targeted tasks because there is not enough research support.
The APC task is a kind of classification problem. Research on the APC task is more abundant than on the ATE task, and a large number of deep-learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity at the sentence or document level. In the APC task, the polarities are usually classified into three categories: positive, negative, and neutral. Sentiment polarity classified at the aspect level can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from the reviews or tweet. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB label to annotate tokens.
Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction.
This paper firstly applies self-attention and local context focus techniques to aspect word extraction task, and fully explore their potential in aspect term extraction task.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves both the performance of ATE task and APC subtask, and achieves new state-of-the-art performance especially the F1 score of ATE task. Besides, we adopted the domain-adapted BERT model trained on the domain-related corpus to the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence applicable for the SemEval-2014 and Chinese review datasets of ABSA joint-task, the aspect term label, and the sentiment polarity label, respectively. The dual label improves the learning efficiency of the proposed model.
Related Works
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.
Related Works ::: Aspect Term Extraction
The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.
Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 was proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews; they chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction.
Up to now, the MHSA and pre-trained model has not been applied in the ATE task. This paper explores the potential of the new techniques of deep learning and new network architecture in the ATE task.
Related Works ::: Aspect Polarity Classification
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. Since the APC task has comprehensively turned to deep neural networks, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed methods for the Chinese-language APC task, which conduct the APC task at the aspect level via three granularities; two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed at the aspect sentiment classification subtask and the aspect-opinion pair identification subtask is proposed by BIBREF30, in which external knowledge is considered and put into the network to alleviate the problem of insufficient training data. The gated alternate neural network (GANN) BIBREF31 was proposed for the APC task to solve the shortcomings of traditional RNNs and CNNs; it applies the gate truncation RNN (GTR) to learn aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review dataset show that the proposed model works well when conducting the ATE and APC subtasks simultaneously.
BERT-SPC is the BERT text-pair classification model; it is a variant of BERT that was adapted to the ABSA task in BIBREF9 and achieves high performance. LCF-BERT BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although a model pre-trained on a large universal corpus can easily be applied to most tasks and improve their performance, it is not task-specific; if the pre-trained BERT is further adapted to a specific task by fine-tuning on a task-related corpus, the task performance can be further improved.
Methodology
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving aside the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
This section introduces the methodology of the APC module and the ATE module, respectively, and the contents are organized in order of the network layer hierarchy.
Methodology ::: Task Definition ::: Aspect Term Extraction
Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and the input is prepared based on IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, which indicate the beginning, inside and outside of the aspect terms, respectively. For the ATE task, the input of the example review “The price is reasonable although the service is poor.” is prepared as $S=\lbrace w_1,w_2 \cdots w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example is labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $.
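The dual sequence for this example can be written out directly; the lists below simply restate the tokens and the IOB labels given above.

```python
# The example review and its IOB labels for the ATE task.
tokens = ["The", "price", "is", "reasonable", "although",
          "the", "service", "is", "poor", "."]
iob    = ["O", "B_asp", "O", "O", "O", "O", "B_asp", "O", "O", "O"]
```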
Methodology ::: Task Definition ::: Aspect Polarity Classification
Aspect polarity classification is a multi-grained sub-task of sentiment analysis, aiming at predicting the polarity of targeted aspects. Suppose that “The price is reasonable although the service is poor.” is the input for the APC task; consistent with the ATE task, $S=\lbrace w_1,w_2 \cdots w_n\rbrace $ stands for all the tokens of the review, and $S^t=\lbrace w_i,w_{i+1} \cdots w_{j}\rbrace (1<=i<j<=n)$ is the aspect sequence within $S$, where $i$ and $j$ are its beginning and end positions in $S$ respectively.
Methodology ::: Model Architecture
Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context respectively. To conduct multi-task training at the same time, the input sequences are tokenized and each token is assigned two kinds of labels: the first label indicates whether the token belongs to an aspect, and the second label marks the polarity of the tokens that belong to an aspect.
Fig. FIGREF18 shows the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context with a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context features. The feature interactive learning (FIL) layer combines the learning of the interaction between local and global context features and predicts the sentiment polarity of aspects, while the extraction of aspects is based on the features of the global context.
Methodology ::: Model Architecture ::: BERT-Shared Layer
The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
Methodology ::: Multi-Head Self-Attention
Multi-head self-attention is based on multiple scaled-dot attention (SDA) units, which can be utilized to extract deep semantic features from the context; the features are represented as self-attention scores. MHSA avoids the negative influence caused by long-distance dependencies in the context when learning the features. Suppose $X_{SDA}$ is the input feature learned by the LCFG; the scaled-dot attention is calculated as follows:
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attention operations in parallel, concatenates the output features, and then transforms the features by multiplying them with a matrix $W^{M H}$. $h$ represents the number of attention heads and is set to 12.
The “;” denotes the concatenation of the features of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for the projection. Additionally, we apply a $\tanh $ activation function to the MHSA learning process, which significantly enhances the feature-capture capability.
Methodology ::: Local Context Focus ::: Semantic-Relative Distance
The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information.
SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token towards a targeted aspect as the SRD of all token-aspect pairs. The SRD is calculated as:
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $ i $-th token and the targeted aspect.
Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact.
Methodology ::: Local Context Focus ::: Context-features Dynamic Mask
Apart from the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it would be easy to directly mask the non-local context words in the input sequence, doing so inevitably discards the features of the non-local context words entirely. With the CDM layer deployed, only a relatively small amount of the semantic context itself is masked at the corresponding output positions, and the relative representation between context words and aspects that carry relatively little semantics is preserved at the corresponding output positions.
According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows,
To mask the features of the non-local context, we define a feature masking matrix $M$, where $ V_{i}^{m} $ is the mask vector for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with respect to the targeted aspect is less than the threshold $\alpha $ are the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
Methodology ::: Local Context Focus ::: Context-features Dynamic Weighting
Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to further explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more moderate strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weights according to their SRD with respect to the targeted aspect.
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context words. Consistently with CDM, $SRD_{i}$ is the SRD between the i-th context token and a targeted aspect. $n$ is the length of the input sequence. $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. The output features of both the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and applying a linear transformation to obtain the local context features.
$W^{f}$, $O^{f}$ and $b^{f}$ are weight matrix and bias vector, respectively. The model can choose one of the three approaches to learn the local context features.
Methodology ::: Feature Interactive Learning
LCF-ATEPC does not rely only on local context features for sentiment polarity classification; it combines the local context features and the global context features and learns from both to conduct polarity classification.
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$.
Methodology ::: Aspect Polarity Classifier
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden state at the position of the first token in the input sequence; then a softmax operation is applied to predict the sentiment polarity.
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
Methodology ::: Aspect Term Extractor
Aspect term extractor first performs the token-level classification for each token, suppose $T_{i}$ is the features on the corresponding position of token $T$,
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
Methodology ::: Training Details
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. And the BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactored the input sequence form compared with BERT-BASE model. The input sequence of BERT-BASE is formed in “[CLS]” + sequence + “[SEP]”, while it is formed in “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC.
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of data input and adopted dual labels of sentiment polarity and token category. The Figure FIGREF55 are the input samples of BERT-BASE and BERT-SPC model, respectively.
The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. The loss function for the APC task is
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is
where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
Experiments ::: Datasets and Hyperparameters Setting
To comprehensive evaluate the performance of the proposed model, the experiments were conducted in three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model capability with processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the origin dataset and annotated each sample with the IOB labels for ATE task and polarity labels for APC tasks, respectively. The polarity of each aspect on the Laptops, Restaurants and datasets may be positive, neutral, and negative, and the conflicting labels of polarity are not considered. The reviews in the four Chinese datasets have been purged, with each aspect may be positive or negative binary polarity. To verify the effectiveness and performance of LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments.
The table demonstrates the details of these datasets.
The samples distribution of those datasets is not balanced. For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority.
Apart from some hyperparameter settings taken from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional notes for experiments using different SRD values.
Experiments ::: Compared Methods
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN BIBREF31 is a novel neural network model for the APC task aimed at overcoming the shortcomings of traditional RNNs and CNNs. GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations, and it obtained the state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the BERT-SPC model and the local context focus mechanism.
LCF-ATE is a variation of the LCF-ATEPC model that only optimizes for the ATE task.
LCF-APC is a variation of LCF-ATEPC that only optimizes for the APC task during the training process.
Experiments ::: Results Analysis
The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
Experiments ::: Results Analysis ::: Performance on Chinese Review Datasets
Table TABREF70 shows the experimental results of the LCF-ATEPC models on the four Chinese review datasets.
Experiments ::: Results Analysis ::: Performance on SemEval-2014 task4
Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on the multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
Experiments ::: Overall Performance Analysis
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance.
The CDM layer works better on twitter dataset because there are a lot of non-standard grammar usage and language abbreviations within it, and the local context focus techniques can promote to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally have a unified “global” view in a specific review. That is, if the customer is not satisfied with one aspect, it is likely to criticize the other. Things will be the same if a customer prefers a restaurant he would be tolerant of some small disamenity, so the CDW mechanism performs better because it does not completely mask the local context of the other aspect. In the multi-task learning process, the convergence rate of APC and ATE tasks is different, so the model does not achieve the optimal effect at the same time.
We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%.
ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models.
Experiments ::: Overall Performance Analysis ::: Effectiveness of Multi-task Learning
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize the parameters for only a single task in the multi-task model, to explore the difference between the optimal performance of single-task and multi-task learning.
The Figure TABREF76 depicts the performance of the LCF-ATEPC model when performing an single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better concerning APC or ATE single task than conducting ABSA multi-task on some datasets. In general, the proposed model LCF-ATEPC proposed in this paper is still superior to other ABSA-oriented multi-task models and even the single-task models aim to APC or ATE. When optimizing the model parameters for through back-propagation of multiple tasks, the multi-task learning model needs to take into account multiple loss functions of the different subtasks. So sometimes the multi-task learning cannot achieve as the best effect as single-task learning does, which is also the compromise of the multi-task learning model when dealing with multiple tasks.
Experiments ::: Overall Performance Analysis ::: Domain-adaption for LCF-ATEPC
The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique when integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a corpus similar or homologous to the target ABSA dataset, a domain-adapted pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain domain-adapted pre-trained BERT models based on the Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy on the classical Restaurant dataset exceeds 90%, which means that LCF-ATEPC is the first ABSA-oriented model to reach 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
Experiments ::: Overall Performance Analysis ::: SRD Sensitivity on Different Datasets
We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. For the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model can achieve the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE task performance reaches the highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is approximately obtained when the SRD threshold is 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
Conclusion
The ATE and APC subtasks were treated as independent tasks in previous studies. Moreover, the multi-task learning model for ATE and APC subtasks has not attracted enough attention from researchers. Besides, the researches concerning the Chinese language-oriented ABSA task are not sufficient and urgent to be proposed and developed. To address the above problems, this paper proposes a multi-task learning model LCF-ATEPC for aspect-based sentiment analysis based on the MHSA and the LCF mechanisms and applies the pre-trained BERT to the ATE sub-tasks for the first time. Not only for the Chinese language, but the models proposed in this paper are multilingual and applicable to the classic English review sentiment analysis task, such as the SemEval-2014 task4. The proposed model can automatically extract aspects from reviews and infer aspects' polarity. Empirical results on 3 commonly English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on ATE and APC tasks.
Acknowledgments and Funding
Thanks to the anonymous reviewers and the scholars who helped us. This research is supported by the Innovation Project of Graduate School of South China Normal University and funded by National Natural Science Foundation of China, Multi-modal Brain-Computer Interface and Its Application in Patients with Consciousness Disorder, Project approval number: 61876067. | Car, Phone, Notebook, Camera |
Q: Why do the authors think that researchers do not pay attention to the research of the Chinese-oriented ABSA task?
Unanswerable
Q: What is specific to the Chinese-oriented ABSA task, and how is it different from other languages?
Text: Introduction
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore research on aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, these models often lack an aspect extraction method for the targeted task because there is not enough research support.
The APC task is a kind of classification problem. Research concerning the APC task is more abundant than that on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity at the sentence level or document level. In the APC task, the polarities are most usually classified into three categories: positive, negative, and neutral. It is obvious that sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from reviews or tweets. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently of the APC task. The ATE task first segments a review into separate tokens and then infers whether each token belongs to any aspect. The tokens may be labeled in different forms in different studies, but most studies have adopted the IOB scheme to annotate tokens.
Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction.
This paper firstly applies self-attention and local context focus techniques to the aspect term extraction task and fully explores their potential in it.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves both the performance of ATE task and APC subtask, and achieves new state-of-the-art performance especially the F1 score of ATE task. Besides, we adopted the domain-adapted BERT model trained on the domain-related corpus to the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence, applicable to the SemEval-2014 and Chinese review datasets of the ABSA joint task: the aspect term label and the sentiment polarity label, respectively. The dual labels improve the learning efficiency of the proposed model.
Related Works
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and focus on one of them. Accordingly, this section will introduce the related works of ATE and APC in two parts.
Related Works ::: Aspect Term Extraction
The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.
Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction.
Up to now, MHSA and pre-trained models have not been applied to the ATE task. This paper explores the potential of these new deep learning techniques and network architectures in the ATE task.
Related Works ::: Aspect Polarity Classification
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has comprehensively turned to deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed the methods for the Chinese language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed to aspect sentiment classification subtask and aspect-opinion pair identification subtask is proposedby BIBREF30, in which the external knowledge are considered and put into the network to alleviate the problem of insufficient train data. The gated alternate neural network (GANN) BIBREF31 proposed for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn the aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review show that the proposed model works fine while conducting ATE and APC subtask simultaneously.
BERT-SPC is the BERT sentence-pair classification model; it is a variant of BERT adapted to solve the ABSA task in BIBREF9 and achieves high performance. LCF-BERT BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model is based on a large universal corpus and is easy to apply to most tasks with improved performance, it is not task-specific. For specific tasks, if the pre-trained BERT is adapted through a fine-tuning process on a task-related corpus, the task performance can be further improved.
Methodology
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction largely unaddressed. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
The methodology of the APC module and the ATE module is introduced respectively, and the contents are organized by the order of the network layer hierarchy.
Methodology ::: Task Definition ::: Aspect Term Extraction
Similar to the named entity recognition (NER) task, the ATE task is a kind of sequence labeling task, and the input is prepared based on IOB labels. We design the IOB labels as $B_{asp}, I_{asp}, O$, and the labels indicate the beginning, inside and outside of the aspect terms, respectively. For the ATE task, the input of the example review “The price is reasonable although the service is poor.” will be prepared as $S=\lbrace w_1,w_2 \cdots w_n\rbrace $, where $w$ stands for a token after tokenization and $n=10$ is the total number of tokens. The example will be labeled as $Y=\lbrace O, B_{asp}, O, O, O, O, B_{asp}, O, O, O\rbrace $.
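To make the IOB scheme concrete, the short sketch below reproduces the labeling of the example review; the helper function and the hard-coded aspect spans are illustrative assumptions rather than part of the original implementation.

```python
# Illustrative sketch: IOB labeling for the ATE task (B_asp / I_asp / O).
# The tokens follow the example in the text; the helper is an assumption for illustration.

def iob_labels(tokens, aspect_spans):
    """Assign B_asp to the first token of each aspect span, I_asp to the rest, O elsewhere."""
    labels = ["O"] * len(tokens)
    for start, end in aspect_spans:          # end is exclusive
        labels[start] = "B_asp"
        for i in range(start + 1, end):
            labels[i] = "I_asp"
    return labels

tokens = ["The", "price", "is", "reasonable", "although",
          "the", "service", "is", "poor", "."]           # n = 10 tokens
aspect_spans = [(1, 2), (6, 7)]                           # "price" and "service"

print(iob_labels(tokens, aspect_spans))
# ['O', 'B_asp', 'O', 'O', 'O', 'O', 'B_asp', 'O', 'O', 'O']
```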
Methodology ::: Task Definition ::: Aspect Polarity Classification
Aspect polarity classification is a multi-grained sub-task of sentiment analysis, aiming at predicting the polarity of targeted aspects. Suppose that “The price is reasonable although the service is poor.” is the input for the APC task; consistent with the ATE task, $S=\lbrace w_1,w_2 \cdots w_n\rbrace $ stands for all the tokens of the review, and $S^t=\lbrace w_i,w_{i+1} \cdots w_{j}\rbrace (1 \le i < j \le n)$ is the aspect sequence within $S$, where $i$ and $j$ are the beginning and end positions in $S$ respectively.
Methodology ::: Model Architecture
Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized and each token is assigned two kinds of labels. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens that belong to an aspect, as illustrated in the sketch below.
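As a concrete illustration of the dual-label scheme, the snippet below annotates the example review with an aspect-term label and a sentiment-polarity label per token; the placeholder value used for non-aspect tokens is an assumption for illustration, not the paper's exact label vocabulary.

```python
# Hypothetical dual-label annotation for one training example (illustrative only).
# Label 1: aspect term (IOB scheme); Label 2: polarity of the tokens inside an aspect.
tokens   = ["The", "price", "is", "reasonable", "although", "the", "service", "is", "poor", "."]
aspect   = ["O",   "B_asp", "O",  "O",          "O",        "O",   "B_asp",   "O",  "O",    "O"]
polarity = ["-",   "Positive", "-", "-",        "-",        "-",   "Negative", "-", "-",    "-"]
# "-" is an assumed placeholder for non-aspect tokens.
```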
Fig FIGREF18 is the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context features. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects is based on the features of the global context.
Methodology ::: Model Architecture ::: BERT-Shared Layer
The pre-trained BERT model is designed to improve performance for most NLP tasks, and the LCF-ATEPC model deploys two independent BERT-Shared layers aimed at extracting local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedding layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of the LCFG and the GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
Methodology ::: Multi-Head Self-Attention
Multi-head self-attention is based on multiple scaled-dot attention (SDA) operations, which can be utilized to extract deep semantic features in the context, and the features are represented as self-attention scores. MHSA can avoid the negative influence caused by the long-distance dependence of the context when learning the features. Suppose $X_{SDA}$ is the input features learned by the LCFG. The scaled-dot attention is calculated as follows:
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attention operations in parallel, concatenates the output features, and then transforms the features by multiplying them with a matrix $W^{M H}$. $h$ represents the number of attention heads and is equal to 12.
The “;” means feature concatenation of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrices for projection . Additionally, we apply a $\tanh $ activation function for the MHSA learning process, which significantly enhanced feature-capture capability.
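The sketch below approximates this MHSA encoder with PyTorch's built-in multi-head attention (12 heads, hidden size 768, as stated above) followed by the $\tanh $ activation; the paper's own encoder may differ in implementation details.

```python
import torch
import torch.nn as nn

class MHSAEncoder(nn.Module):
    """Multi-head self-attention encoder with a tanh activation (h = 12 heads)."""
    def __init__(self, d_h: int = 768, n_heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=d_h, num_heads=n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)   # queries, keys and values all come from x
        return torch.tanh(out)

features = torch.randn(2, 10, 768)    # batch of 2 sequences of length 10
print(MHSAEncoder()(features).shape)  # torch.Size([2, 10, 768])
```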
Methodology ::: Local Context Focus ::: Semantic-Relative Distance
The determination of the local context depends on semantic-relative distance (SRD), which is proposed to decide whether a context word belongs to the local context of a targeted aspect and thereby help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treating aspects and context as independent segments and modeling their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect together with its local context, because empirical results show that the local context of the target aspect contains more important information.
SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token and a targeted aspect as the SRD of the token-aspect pair. The SRD is calculated as:
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $ i $-th token and the targeted aspect.
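The exact equation is given in the paper; the helper below computes an SRD consistent with these definitions, assuming $SRD_i = |i - P_a| - \lfloor m/2 \rfloor $ (distance to the aspect centre minus half the aspect length), which counts the tokens between a token and the aspect.

```python
def srd(i: int, aspect_start: int, aspect_len: int) -> int:
    """Semantic-relative distance of the i-th token to the targeted aspect.
    The formula is inferred from the variable definitions and is illustrative."""
    p_a = aspect_start + aspect_len // 2          # central position of the aspect
    return abs(i - p_a) - aspect_len // 2

tokens = ["The", "price", "is", "reasonable", "although", "the", "service", "is", "poor", "."]
# Aspect "service" starts at index 6 and has length 1.
print([srd(i, aspect_start=6, aspect_len=1) for i in range(len(tokens))])
# -> [6, 5, 4, 3, 2, 1, 0, 1, 2, 3]
```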
Figure FIGREF30 and Figure FIGREF31 show two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and the context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate a self-attention score with the other tokens through parallel matrix operations. According to the definition of MHSA, the features at the output position corresponding to each token are most closely related to the token itself. After the outputs of all tokens are calculated by the MHSA encoder, the output features at each output position will be masked or attenuated, except that the local context will be retained intact.
Methodology ::: Local Context Focus ::: Context-features Dynamic Mask
Apart from the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it is easy to directly mask the non-local context words in the input sequence, doing so inevitably discards the features of the non-local context words entirely. With the CDM layer deployed, only a relatively small amount of the semantic context itself is masked at the corresponding output positions, while the relative representation between context words and aspects, carrying relatively little of their semantics, is preserved at the corresponding output positions.
According to the CDM implementation, the features at all positions of non-local context words will be set to zero vectors. In order to avoid an unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that $O_{BERT^l}$ is the preliminary output features of $BERT^l$; then we obtain the local context feature output as follows,
To mask the features of the non-local context, we define a feature masking matrix $M$, where $ V_{i}^{m} $ is the mask vector for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with regard to the targeted aspect is less than the threshold $\alpha $ are the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
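A rough sketch of the CDM layer follows, reusing the srd helper and the MHSAEncoder from the sketches above; the masking rule (ones vector for local-context tokens, zeros vector otherwise) mirrors the description, while the threshold comparison and the shapes are illustrative assumptions.

```python
import torch

def cdm_mask(seq_len: int, aspect_start: int, aspect_len: int,
             alpha: int, d_h: int = 768) -> torch.Tensor:
    """Feature masking matrix M: the ones vector E for tokens whose SRD to the
    aspect is below the threshold alpha, the zeros vector O otherwise."""
    rows = [torch.ones(d_h) if srd(i, aspect_start, aspect_len) < alpha else torch.zeros(d_h)
            for i in range(seq_len)]
    return torch.stack(rows)                      # shape: (seq_len, d_h)

o_bert_l = torch.randn(1, 10, 768)                # preliminary output of BERT^l
m = cdm_mask(seq_len=10, aspect_start=6, aspect_len=1, alpha=5)
masked = o_bert_l * m                             # element-wise product realises the masking
o_local = MHSAEncoder()(masked)                   # rebalance the masked features
print(o_local.shape)                              # torch.Size([1, 10, 768])
```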
Methodology ::: Local Context Focus ::: Context-features Dynamic Weighting
Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to explore the potential of the LCF mechanism further. The CDW is another implementation of the LCF mechanism and takes a more moderate strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed according to their SRD with respect to the targeted aspect.
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistently with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. The output features of both the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and taking a linear transformation as the features of the local context.
$W^{f}$, $O^{f}$ and $b^{f}$ are the weight matrix, output features and bias vector, respectively. The model can choose one of the three approaches to learn the local context features.
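The sketch below illustrates the CDW weighting described above, again reusing the srd helper; the paper specifies the decay in its equation, and the linear form (n - (SRD_i - alpha)) / n used here is an assumed stand-in consistent with the description.

```python
import torch

def cdw_weights(seq_len: int, aspect_start: int, aspect_len: int,
                alpha: int, d_h: int = 768) -> torch.Tensor:
    """Weight matrix W: local-context tokens keep weight 1; non-local tokens are
    decayed according to their SRD (linear decay assumed for illustration)."""
    rows = []
    for i in range(seq_len):
        d = srd(i, aspect_start, aspect_len)
        w = 1.0 if d < alpha else max(0.0, (seq_len - (d - alpha)) / seq_len)
        rows.append(torch.full((d_h,), w))
    return torch.stack(rows)

o_bert_l = torch.randn(1, 10, 768)
o_cdw = o_bert_l * cdw_weights(seq_len=10, aspect_start=6, aspect_len=1, alpha=3)
print(o_cdw.shape)   # torch.Size([1, 10, 768])
```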
Methodology ::: Feature Interactive Learning
LCF-ATEPC does not rely only on local context features for sentiment polarity classification; it combines and learns the local context features and the global context features to conduct polarity classification.
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weight matrix and bias vector, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on $O_{dense}^{l g}$.
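A minimal sketch of the feature interactive learning layer is shown below; the concatenation, the $W^{lg}$/$b^{lg}$ projection and the follow-up MHSA encoding mirror the description, while the use of PyTorch's built-in attention and equal sequence lengths for the two inputs are simplifying assumptions.

```python
import torch
import torch.nn as nn

class FeatureInteractiveLearning(nn.Module):
    """Concatenate local and global context features, project back to d_h
    (W^lg, b^lg), then encode the result with multi-head self-attention."""
    def __init__(self, d_h: int = 768, n_heads: int = 12):
        super().__init__()
        self.dense = nn.Linear(2 * d_h, d_h)      # W^lg in R^{d_h x 2 d_h}, bias b^lg
        self.mhsa = nn.MultiheadAttention(d_h, n_heads, batch_first=True)

    def forward(self, o_local: torch.Tensor, o_global: torch.Tensor) -> torch.Tensor:
        o_dense = self.dense(torch.cat([o_local, o_global], dim=-1))  # O^{lg}_dense
        out, _ = self.mhsa(o_dense, o_dense, o_dense)
        return out

fil = FeatureInteractiveLearning()
o_l, o_g = torch.randn(1, 10, 768), torch.randn(1, 10, 768)
print(fil(o_l, o_g).shape)   # torch.Size([1, 10, 768])
```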
Methodology ::: Aspect Polarity Classifier
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden state at the position corresponding to the first token of the input sequence. Then a softmax operation is applied to predict the sentiment polarity.
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
Methodology ::: Aspect Term Extractor
The aspect term extractor first performs token-level classification for each token. Suppose $T_{i}$ is the features at the corresponding position of token $T$,
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
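The two output heads can be sketched as below; the hidden size, the three polarity classes and the three IOB token classes are assumptions matching the datasets described later, not a definitive implementation.

```python
import torch
import torch.nn as nn

class AspectPolarityClassifier(nn.Module):
    """Head-pooling (hidden state of the first token) + linear layer + softmax."""
    def __init__(self, d_h: int = 768, num_polarities: int = 3):
        super().__init__()
        self.dense = nn.Linear(d_h, num_polarities)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        head = fused_features[:, 0, :]                     # first-token hidden state
        return torch.softmax(self.dense(head), dim=-1)     # Y_polarity

class AspectTermExtractor(nn.Module):
    """Token-level classification over N token categories (e.g. IOB tags)."""
    def __init__(self, d_h: int = 768, num_token_classes: int = 3):
        super().__init__()
        self.dense = nn.Linear(d_h, num_token_classes)

    def forward(self, global_features: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.dense(global_features), dim=-1)  # Y_term per token

fused, glob = torch.randn(1, 10, 768), torch.randn(1, 12, 768)
print(AspectPolarityClassifier()(fused).shape)  # torch.Size([1, 3])
print(AspectTermExtractor()(glob).shape)        # torch.Size([1, 12, 3])
```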
Methodology ::: Training Details
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactors the form of the input sequence compared with the BERT-BASE model. The input sequence of BERT-BASE is formed as “[CLS]” + sequence + “[SEP]”, while for BERT-SPC it is formed as “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]”.
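The two input forms can be reproduced directly from this description:

```python
def bert_base_input(sequence_tokens):
    return ["[CLS]"] + sequence_tokens + ["[SEP]"]

def bert_spc_input(sequence_tokens, aspect_tokens):
    return ["[CLS]"] + sequence_tokens + ["[SEP]"] + aspect_tokens + ["[SEP]"]

review = "The price is reasonable although the service is poor .".split()
print(bert_base_input(review))
print(bert_spc_input(review, ["service"]))
```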
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of the data input and adopted dual labels of sentiment polarity and token category. Figure FIGREF55 shows the input samples of the BERT-BASE and BERT-SPC models, respectively.
The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. Here is the loss function for the APC task,
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is
where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
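A sketch of the joint objective is given below; treating the overall loss as the plain sum of the two task losses plus the L2 term, and the value of lambda, are assumptions for illustration.

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def joint_loss(polarity_logits, polarity_gold, token_logits, token_gold,
               model_params, l2_lambda: float = 1e-5) -> torch.Tensor:
    """Cross-entropy for APC + cross-entropy for ATE + L2 regularization."""
    loss_apc = ce(polarity_logits, polarity_gold)
    # Token-level loss: flatten (batch, seq_len, N) to (batch * seq_len, N).
    loss_ate = ce(token_logits.reshape(-1, token_logits.size(-1)), token_gold.reshape(-1))
    l2 = sum((p ** 2).sum() for p in model_params)
    return loss_apc + loss_ate + l2_lambda * l2

pol_logits, pol_gold = torch.randn(4, 3), torch.randint(0, 3, (4,))
tok_logits, tok_gold = torch.randn(4, 10, 3), torch.randint(0, 3, (4, 10))
params = [torch.randn(5, 5, requires_grad=True)]
print(joint_loss(pol_logits, pol_gold, tok_logits, tok_gold, params))
```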
Experiments ::: Datasets and Hyperparameters Setting
To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets: the Laptops and Restaurant datasets of SemEval-2014 Task 4 subtask 2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's ability to process the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets, reformatting the original data and annotating each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been cleaned, with each aspect carrying a binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments.
The table demonstrates the details of these datasets.
The sample distribution of these datasets is not balanced. For example, most samples in the Restaurant dataset are positive, while neutral samples account for the majority in the Twitter dataset.
Apart from some hyperparameter settings taken from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions given for experiments with different SRDs.
Experiments ::: Compared Methods
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN is a novel neural network model for the APC task aimed at solving the shortcomings of traditional RNNs and CNNs. GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pre-trained BERT model. We adapt it to ABSA multi-task learning, which equips it with the same ability to automatically extract aspect terms and classify aspect polarity as the LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the BERT-SPC model and the local context focus mechanism.
LCF-ATE is the variation of the LCF-ATEPC model which optimizes only for the ATE task.
LCF-APC is the variation of LCF-ATEPC which optimizes only for the APC task during the training process.
Experiments ::: Results Analysis
The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all the Chinese and English datasets was tested; then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of the domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
Experiments ::: Results Analysis ::: Performance on Chinese Review Datasets
Table TABREF70 reports the experimental results of the LCF-ATEPC models on the four Chinese review datasets.
Experiments ::: Results Analysis ::: Performance on SemEval-2014 task4
Table TABREF72 lists the main experimental results of the LCF-ATEPC models, comparing their performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on the multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
Experiments ::: Overall Performance Analysis
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance.
The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar usage and abbreviations, and the local context focus technique helps to infer the polarity of such terms. Surprisingly, for the Laptop and Restaurant datasets, reviewers occasionally hold a unified “global” view in a specific review. That is, if a customer is not satisfied with one aspect, they are likely to criticize the others; likewise, if a customer likes a restaurant, they will be tolerant of some small disamenities. Therefore the CDW mechanism performs better on these datasets because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks differ, so the model does not achieve the optimal effect on both at the same time.
We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other proposed BERT-based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of the ATE subtask on three datasets to up to 99%.
ATEPC-Fusion is a supplementary scheme of the LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than that of the existing BERT-based models.
Experiments ::: Overall Performance Analysis ::: Effectiveness of Multi-task Learning
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize parameters for only a single task in the multi-task model, in order to explore the difference between the optimal performance of single-task learning and the multi-task learning model.
Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the APC or ATE single task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even the single-task models aimed at APC or ATE. When optimizing the model parameters through back-propagation over multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks, so sometimes multi-task learning cannot achieve the same optimal effect as single-task learning does; this is the compromise a multi-task learning model makes when dealing with multiple tasks.
Experiments ::: Overall Performance Analysis ::: Domain-adaption for LCF-ATEPC
The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique for integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-related pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy on the classical Restaurant dataset exceeds 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
Experiments ::: Overall Performance Analysis ::: SRD Sensitivity on Different Datasets
We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. Besides, for the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model achieves the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the ATE task performance is highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is obtained when the SRD threshold is approximately 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
Conclusion
The ATE and APC subtasks were treated as independent tasks in previous studies. Moreover, multi-task learning models for the ATE and APC subtasks have not attracted enough attention from researchers. Besides, research concerning the Chinese-language-oriented ABSA task is insufficient and urgently needs to be developed. To address the above problems, this paper proposes a multi-task learning model, LCF-ATEPC, for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. Not only for the Chinese language, the models proposed in this paper are multilingual and applicable to classic English review sentiment analysis tasks, such as SemEval-2014 Task 4. The proposed model can automatically extract aspects from reviews and infer the aspects' polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks.
Acknowledgments and Funding
Thanks to the anonymous reviewers and the scholars who helped us. This research is supported by the Innovation Project of Graduate School of South China Normal University and funded by National Natural Science Foundation of China, Multi-modal Brain-Computer Interface and Its Application in Patients with Consciousness Disorder, Project approval number: 61876067. | Unanswerable |
e9f868f22ae70c7681c28228b6019e155013f8d6 | e9f868f22ae70c7681c28228b6019e155013f8d6_0 | Q: what is the size of this dataset?
Text: Introduction
Traditionally, a word is represented as a sparse vector indicating the word itself (one-hot vector) or the context of the word (distributional vector). However, both the one-hot notation and distributional notation suffer from data sparseness since dimensions of the word vector do not interact with each other. Distributed word representation addresses the data sparseness problem by constructing a dense vector of a fixed length, wherein contexts are shared (or distributed) across dimensions. Distributed word representation is known to improve the performance of many NLP applications such as machine translation BIBREF0 and sentiment analysis BIBREF1 to name a few. The task to learn a distributed representation is called representation learning.
However, evaluating the quality of learned distributed word representation itself is not straightforward. In language modeling, perplexity or cross-entropy is widely accepted as a de facto standard for intrinsic evaluation. In contrast, distributed word representations include the additive (or compositional) property of the vectors, which cannot be assessed by perplexity. Moreover, perplexity makes little use of infrequent words; thus, it is not appropriate for evaluating distributed presentations that try to represent them.
Therefore, a word similarity task and/or a word analogy task are generally used to evaluate distributed word representations in the NLP literature. The former judges whether distributed word representations improve modeling contexts, and the latter estimates how well the learned representations achieve the additive property. However, such resources other than for English (e.g., Japanese) seldom exist. In addition, most of these datasets comprise high-frequency nouns so that they tend not to include other parts of speech. Hence, previous data fail to evaluate word representations of other parts of speech, including content words such as verbs and adjectives.
To address the problem of the lack of a dataset for evaluating Japanese distributed word representations, we propose to build a Japanese dataset for the word similarity task.
The main contributions of our work are as follows:
Related Work
In general, distributed word representations are evaluated using a word similarity task. For instance, WordSim353 2002:PSC:503104.503110, MC BIBREF2 , RG BIBREF3 , and SCWS Huang:2012:IWR:2390524.2390645 have been used to evaluate word similarities in English. Moreover, baker-reichart-korhonen:2014:EMNLP2014 built a verb similarity dataset (VSD) based on WordSim353 because there was no dataset of verbs in the word-similarity task. Recently, SimVerb-3500 was introduced to evaluate human understanding of verb meaning Gerz:2016:EMNLP. It provides human ratings for the similarity of 3,500 verb pairs so that it enables robust evaluation of distributed representation for verbs. However, most of these datasets include English words only. There has been no Japanese dataset for the word-similarity task.
Apart from English, WordSim353 and SimLex-999 Hill:2015:CL have been translated and rescored in other languages: German, Italian and Russian Leviant:2015:arXiv. SimLex-999 has also been translated and rescored in Hebrew and Croatian Mrksic:2017:TACL. SimLex-999 explicitly targets at similarity rather than relatedness and includes adjective, noun and verb pairs. However, this dataset contains only frequent words.
In addition, the distributed representation of words is generally learned using only word-level information. Consequently, the distributed representations of low-frequency words and unknown words cannot be learned well with conventional models. However, low-frequency words and unknown words often comprise high-frequency morphemes (e.g., unkingly INLINEFORM0 un + king + ly). Some previous studies take advantage of morphological information to provide a suitable representation for low-frequency words and unknown words BIBREF4 , BIBREF5 . Morphological information is particularly important for Japanese since Japanese is an agglutinative language.
Construction of a Japanese Word Similarity Dataset
What makes a pair of words similar? Most of the previous datasets do not concretely define the similarity of word pairs. The difference in the similarity of word pairs originates in each annotator's mind, resulting in different scales for a word. Thus, we propose to use an example-based approach (Table TABREF9 ) to control the variance of the similarity ratings. We removed the context of a word when we extracted it. Consequently, we consider that an ambiguous word has high variance of similarity, whereas we can obtain low variance of similarity when the word is monosemous.
For this study, we constructed a Japanese word similarity dataset. We followed the procedure used to construct the Stanford Rare Word Similarity Dataset (RW) Luong-etal:conll13:morpho.
We extracted Japanese word pairs from the Evaluation Dataset of Japanese Lexical Simplification kodaira. It targeted content words (nouns, verbs, adjectives, adverbs). It included 10 contexts about target words annotated with their lexical substitutions and rankings. Figure FIGREF1 shows an example of the dataset. A word in square brackets in the text is represented as a target word of simplification. A target word is not only recorded in the lemma form but also in the conjugated form. We built a Japanese similarity dataset from this dataset using the following procedure.
Comparison to Other Datasets
Table TABREF17 shows how several resources vary. WordSim353 comprises high-frequency words and so the variance tends to be low. In contrast, RW includes low-frequency words, unknown words, and complex words composed of several morphemes; thus, the variance is large. VSD has many polysemous words, which increase the variance. Despite the fact that our dataset, similar to the VSD and RW datasets, contains low-frequency and ambiguous words, its variance is 3.00. The variance level is low compared with the other corpora. We considered that the examples of the similarity in the task request reduced the variance level.
We did not expect SCWS to have the largest variance of the datasets shown in Table TABREF17 because it gave the context to annotators during annotation. At the beginning, we thought the context would serve to remove ambiguity and clarify the meaning of a word; however, after looking into the dataset, we determined that the construction procedure admitted several extraordinary annotators. It is crucial to filter insincere annotators and provide straightforward instructions to improve the quality of the similarity annotation, as we did.
To obtain better similarity scores, each dataset should utilize a reliability score to exclude extraordinary annotators. For example, for SCWS, an annotator rating the similarity of the pair “CD” and “aglow” assigned a rating of 10. We assumed it was a typo or a misunderstanding of the words. To address this problem, such an annotation should be removed before calculating the true similarity. All the datasets except for RW simply calculate the average of the similarity ratings, but datasets created using crowdsourcing should consider the reliability of the annotators.
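One simple way to operationalise such reliability filtering is sketched below: annotators whose ratings correlate poorly with the consensus of the others are dropped before averaging. The correlation threshold and the toy ratings are arbitrary illustrations, not the procedure used by any of the datasets discussed.

```python
import numpy as np

def filter_and_average(ratings: np.ndarray, min_corr: float = 0.3) -> np.ndarray:
    """ratings: (n_annotators, n_pairs) similarity scores.
    Drop annotators whose ratings correlate poorly with the mean of the others,
    then average the remaining ratings for each word pair."""
    keep = []
    for a in range(ratings.shape[0]):
        others = np.delete(ratings, a, axis=0).mean(axis=0)
        if np.corrcoef(ratings[a], others)[0, 1] >= min_corr:
            keep.append(a)
    return ratings[keep].mean(axis=0)

# Toy example: 4 annotators, 5 word pairs; the last annotator is unreliable.
ratings = np.array([[8, 2, 7, 1, 9],
                    [7, 3, 6, 2, 8],
                    [9, 1, 8, 1, 10],
                    [1, 10, 2, 9, 0]])
print(filter_and_average(ratings))   # averages only the first three annotators
```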
Analysis
We present examples of a pair with high variance of similarity as shown below:
(e.g., a pairing of “fastUTF8min(速い)” and “earlyUTF8min(早い)”.)
Although they are similar in meaning with respect to the time, they have nothing in common with respect to speed; Annotator A assigned a rating of 10, but Annotator B assigned a rating of 1.
Another example is the pairing of “be eagerUTF8min(懇願する)” and “requestUTF8min(頼む)”. Even though the act indicated by the two verbs is the same, there are some cases where they express different degrees of feeling. Compared with “request”, “be eager” indicates a stronger feeling. There were two annotators who emphasized the similarity of the act itself rather than the different degrees of feeling, and vice versa. In this case, Annotator A assigned a rating of 9, but Annotator B assigned a rating of 2.
Although it was necessary to distinguish similarity and semantic relatedness BIBREF7 and we asked annotators to rate the pairs based on semantic similarity, it was not straightforward to put paraphrase candidates onto a single scale considering all the attributes of the words. This limitation might be relaxed if we would ask annotators to refer to a thesaurus or an ontology such as Japanese Lexicon GoiTaikei:1997.
(e.g., a pairing of “sloganUTF8min(スローガン)” and “sloganUTF8min(標語)”.)
In Japanese, we can write a word using hiragana, katakana, or kanji characters; however, because hiragana and katakana represent only the pronunciation of a word, annotators might think of different words. In this case, Annotator A assigned a rating of 8, but Annotator B assigned a rating of 0. Similarly, we confirmed the same phenomenon in other parts of speech. In particular, nouns can have several word pairs with different spellings, which results in their IAA becoming too low compared to other parts of speech.
(e.g., a pairing of “oftenUTF8min(しばしば)” and “frequentlyUTF8min(しきりに)”.)
We confirmed that the variance becomes larger among adverbs expressing frequency. This is due to differences in the frequency that annotators imagine. In this case, Annotator A assigned a rating of 9, but Annotator B assigned a rating of 0. Similarly, we confirmed the same phenomenon among adverbs expressing time.
Conclusion
In this study, we constructed the first Japanese word similarity dataset. It contains various parts of speech and includes rare words in addition to common words. Crowdsourced annotators assigned similarity to word pairs during the word similarity task. We gave examples of similarity in the task request sent to annotators, so that we reduced the variance of each word pair. However, we did not restrict the attributes of words, such as the level of feeling, during annotation. Error analysis revealed that the notion of similarity should be carefully defined when constructing a similarity dataset.
As a future work, we plan to construct a word analogy dataset in Japanese by translating an English dataset to Japanese. We hope that a Japanese database will facilitate research in Japanese distributed representations.
| Unanswerable |
7aaaf7bff9947c6d3b954ae25be87e6e1c49db6d | 7aaaf7bff9947c6d3b954ae25be87e6e1c49db6d_0 | Q: did they use a crowdsourcing platform for annotations?
| Yes |
4a2248e1c71c0b0183ab0d225440cae2da396b8d | 4a2248e1c71c0b0183ab0d225440cae2da396b8d_0 | Q: where does the data come from?
| Evaluation Dataset of Japanese Lexical Simplification kodaira |
1244cf6d75e3aa6d605a0f4b141c015923a3f2e7 | 1244cf6d75e3aa6d605a0f4b141c015923a3f2e7_0 | Q: What is the criteria for a good metric?
Text: Introduction
Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define state-of-the-art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the n-grams in the candidate summary that match n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have posed serious limitations and have already been criticized by the academic community. In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should have, and propose concrete ways to use recent advances in NLP to design data-driven metrics addressing the weaknesses found in BLEU and ROUGE.
Related Work ::: BLEU, ROUGE and n-gram matching approaches
BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depends on many factors, including their simplicity and intuitive interpretability. Yet the main factor is the claim that they correlate highly with human judgement BIBREF0. This has been criticised extensively in the literature and the shortcomings of these methods have been widely studied. Reiter BIBREF2, in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison et al BIBREF3 examine BLEU in the context of machine translation and find that BLEU correlates with human judgment neither on adequacy (whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor on fluency (the quality of language in a sentence). Sulem et al BIBREF4 examine BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity, and report that BLEU has very low or in some cases negative correlation with human judgment. Considering these results, it is a natural step to pursue new avenues for natural language evaluation, and with the advent of deep learning, using neural networks for this task is a promising step forward.
Related Work ::: Transformers, BERT and GPT
Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks, as explained in Radford et al BIBREF5. There are two leading architectures for language modeling: Recurrent Neural Networks (RNNs) BIBREF6 and Transformers BIBREF7. RNNs handle the input tokens, words or characters, one by one through time to learn the relationships between them, whereas transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism.
Related Work ::: Model-based metrics
While BLEU and ROUGE are defined in a discrete space, new evaluation metrics can be defined in a continuous space. BERTscore BIBREF8 uses word embeddings and cosine similarity to create a score array and uses greedy matching to maximize the similarity score. Sentence Mover's Similarity BIBREF9 uses the mover similarity, the Wasserstein distance, between sentence embeddings generated by averaging the word embeddings in a sentence. Both of these methods report stronger correlations with human judgment and better results when compared to BLEU and ROUGE. While they use word embeddings BIBREF10 to transfer their sentences into a continuous space, they still rely on distance metrics to evaluate those sentences. BLEND BIBREF11 uses an SVM to combine different existing evaluation metrics. Another proposed evaluation method is RUSE BIBREF12, which embeds both sentences separately and pools them to a given size. After that, it uses a pre-trained MLP to predict on different tasks. This quality-estimator metric is then proposed for use in language evaluation. Our proposed methodology takes neural language evaluation beyond architecture specifications: we propose a framework in which an evaluator's success can be determined.
Challenges with BLEU and ROUGE
In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.
Challenges with BLEU and ROUGE ::: High score, opposite meanings
Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score.
Challenges with BLEU and ROUGE ::: Low score, similar meanings
In addition to not being sensitive to negation, BLEU and ROUGE can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same; however, the overlap between words in s1 and s2 will not necessarily be significant.
Challenges with BLEU and ROUGE ::: High score, unintelligible sentences
A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1.
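To make this concrete, the following minimal Python sketch (not the official BLEU/ROUGE implementations; brevity penalty, smoothing and higher-order n-grams are omitted) computes unigram overlap for the two sentences above; the precision of the garbled s2 against s1 comes out as a perfect 1.0.

import re
from collections import Counter

def tokenize(s):
    # Simple punctuation-stripping tokenization.
    return re.findall(r"\w+", s.lower())

def unigram_scores(candidate, reference):
    # Clipped unigram matches between candidate and reference tokens.
    cand, ref = Counter(tokenize(candidate)), Counter(tokenize(reference))
    matches = sum(min(count, ref[tok]) for tok, count in cand.items())
    precision = matches / sum(cand.values())  # unigram BLEU without brevity penalty
    recall = matches / sum(ref.values())      # ROUGE-1 recall
    return precision, recall

s1 = "On a morning, I saw a man running in the street."
s2 = "On morning a, I saw the running a man street"
print(unigram_scores(s2, s1))  # precision 1.0 despite s2 being unintelligible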
Challenges with BLEU and ROUGE ::: Experiments ::: Experiments with carefully crafted sentences
To illustrate our argument, let's consider the following pairs of sentences:
In Pair 1: s1 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown to be robust to criticism" and s2 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown not to be robust to criticism". They differ only by the negation added in s2.
In Pair 2: s1 is "On a morning, I saw a man running in the street." and s2 is "In the early hours of the day, I observed one gentleman jogging along the road”. s2 is a paraphrase of s1.
Challenges with BLEU and ROUGE ::: Experiments ::: Semantic similarity experiments
To go beyond carefully crafted sentences, we assessed how well BLEU and ROUGE correlate with human judgement of similarity between pairs of paraphrased sentences and compared their performance to a RoBERTa model finetuned for semantic similarity (Table 2).
Towards a robust data-driven approach ::: Metric Scorecard
In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria for what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences or low scores to semantically related sentences.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Semantic Similarity
Starting from the RoBERTa large pre-trained model BIBREF13 , we finetune it to predict sentence similarity on the STS-B benchmark dataset. Given two sentences of text, s1 and s2, the systems need to compute how similar s1 and s2 are, returning a similarity score between 0 and 5. The dataset comprises naturally occurring pairs of sentences drawn from several domains and genres, annotated by crowdsourcing. The benchmark comprises 8628 sentence pairs with 5700 pairs in the training set, 1500 in the development set and 1379 in the test set.
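A minimal sketch of this fine-tuning step, written with the Hugging Face transformers and datasets libraries (the paper does not specify its training code, and the hyperparameters below are illustrative rather than the ones used for the reported results):

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
# num_labels=1 turns the classification head into a regression head (similarity in [0, 5]).
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=1)

stsb = load_dataset("glue", "stsb")
encoded = stsb.map(lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True),
                   batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="stsb-roberta", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()

The same recipe carries over to the logical equivalence and sentence intelligibility evaluators below by swapping the dataset for MNLI (with num_labels=3) or CoLA (with num_labels=2).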
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Logical Equivalence
For logical inference, we start with a pretrained RoBERTa BIBREF13 model and finetune it using the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018). It is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (neutral). The training set includes 393k sentence pairs, development set includes 20k and test set includes 20k. The accuracy of the pre-trained model on the development set is 0.9060.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Sentence Intelligibility
We start with a pretrained RoBERTa BIBREF13 model and finetune it using the Corpus of Linguistic Acceptability (CoLA). It consists of examples of expert English sentence acceptability judgments drawn from 22 books. Each example is a single string of English words annotated with whether it is a grammatically possible sentence of English. The training set for CoLA has 10k sentences and the development set includes 1k sentences. The current model reaches 67.8 percent accuracy.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Rationale for Language Models
The overall rationale for using language models fine-tuned for specific aspects of the scorecard is that recent work has shown that language models are unsupervised multitask learners BIBREF5 and can rediscover the classical NLP pipeline. By fine-tuning them on a specific task, we make them pay attention to the correct level of abstraction corresponding to the scorecard.
Conclusion
In this work, we have shown three main limitations of BLEU and ROUGE and proposed a path forward outlining why and how state of the art language models can be used as summary evaluators. Future work includes extending the proposed scorecard, updating the models matching best the scorecard criteria and assessing published summarization models using that scorecard. | The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences. |
c8b9b962e4d40c50150b2f8873a4004f25398464 | c8b9b962e4d40c50150b2f8873a4004f25398464_0 | Q: What are the three limitations?
Text: Introduction
Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define state-of-the-art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the n-grams in the candidate summary that match n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have serious limitations and have already been criticized by the academic community. In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should satisfy, and propose concrete ways to use recent advances in NLP to design data-driven metrics addressing the weaknesses found in BLEU and ROUGE.
Related Work ::: BLEU, ROUGE and n-gram matching approaches
BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depends on many factors, including their simplicity and their intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement BIBREF0. This claim has been criticised extensively in the literature, and the shortcomings of these methods have been widely studied. Reiter BIBREF2, in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison et al BIBREF3 examine BLEU in the context of machine translation and find that BLEU correlates with human judgment neither on adequacy (whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor on fluency (the quality of language in a sentence). Sulem et al BIBREF4 examine BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity, and report that BLEU has a very low or in some cases negative correlation with human judgment. Considering these results, it is a natural step to pursue new avenues for natural language evaluation, and with the advent of deep learning, using neural networks for this task is a promising step forward.
Related Work ::: Transformers, BERT and GPT
Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks, as explained in Radford et al BIBREF5. There are two leading architectures for language modeling: Recurrent Neural Networks (RNNs) BIBREF6 and Transformers BIBREF7. RNNs handle the input tokens, words or characters, one by one through time to learn the relationships between them, whereas transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism.
Related Work ::: Model-based metrics
While BLEU and ROUGE are defined in a discrete space, new evaluation metrics can be defined in a continuous embedding space. BERTscore BIBREF8 uses word embeddings and cosine similarity to create a score array and uses greedy matching to maximize the similarity score. Sentence Mover's Similarity BIBREF9 uses the mover similarity (Wasserstein distance) between sentence embeddings generated by averaging the word embeddings in a sentence. Both of these methods report stronger correlations with human judgment and better results when compared to BLEU and ROUGE. While they use word embeddings BIBREF10 to map sentences into a continuous space, they still rely on distance metrics to evaluate those sentences. BLEND BIBREF11 uses an SVM to combine different existing evaluation metrics. Another proposed evaluation method is RUSE BIBREF12, which embeds both sentences separately and pools them to a given size; a pre-trained MLP is then used to predict on different tasks. This quality estimator metric is then proposed to be used in language evaluation. Our proposed methodology is to take neural language evaluation beyond architecture specifications: we propose a framework in which an evaluator's success can be determined.
Challenges with BLEU and ROUGE
In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.
Challenges with BLEU and ROUGE ::: High score, opposite meanings
Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score.
Challenges with BLEU and ROUGE ::: Low score, similar meanings
In addition to not being sensitive to negation, BLEU and ROUGE can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same; however, the overlap between words in s1 and s2 will not necessarily be significant.
Challenges with BLEU and ROUGE ::: High score, unintelligible sentences
A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1.
Challenges with BLEU and ROUGE ::: Experiments ::: Experiments with carefully crafted sentences
To illustrate our argument, let's consider the following pairs of sentences:
In Pair 1: s1 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown to be robust to criticism" and s2 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown not to be robust to criticism". They differ only by the negation added in s2.
In Pair 2: s1 is "On a morning, I saw a man running in the street." and s2 is "In the early hours of the day, I observed one gentleman jogging along the road”. s2 is a paraphrase of s1.
Challenges with BLEU and ROUGE ::: Experiments ::: Semantic similarity experiments
To go beyond carefully crafted sentences, we assessed how well BLEU and ROUGE correlate with human judgement of similarity between pairs of paraphrased sentences and compared their performance to a RoBERTa model finetuned for semantic similarity (Table 2).
Towards a robust data-driven approach ::: Metric Scorecard
In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria for what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences or low scores to semantically related sentences.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Semantic Similarity
Starting from the RoBERTa large pre-trained model BIBREF13 , we finetune it to predict sentence similarity on the STS-B benchmark dataset. Given two sentences of text, s1 and s2, the systems need to compute how similar s1 and s2 are, returning a similarity score between 0 and 5. The dataset comprises naturally occurring pairs of sentences drawn from several domains and genres, annotated by crowdsourcing. The benchmark comprises 8628 sentence pairs with 5700 pairs in the training set, 1500 in the development set and 1379 in the test set.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Logical Equivalence
For logical inference, we start with a pretrained RoBERTa BIBREF13 model and finetune it using the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018). It is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (neutral). The training set includes 393k sentence pairs, development set includes 20k and test set includes 20k. The accuracy of the pre-trained model on the development set is 0.9060.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Sentence Intelligibility
We start with a pretrained RoBERTa BIBREF13 model and finetune it using the Corpus of Linguistic Acceptability (CoLA). It consists of examples of expert English sentence acceptability judgments drawn from 22 books. Each example is a single string of English words annotated with whether it is a grammatically possible sentence of English. The training set for CoLA has 10k sentences and the development set includes 1k sentences. The current model reaches 67.8 percent accuracy.
Towards a robust data-driven approach ::: Implementing metrics satisfying scorecard ::: Rationale for Language Models
The overall rationale for using language models fine-tuned for specific aspects of the scorecard is that recent work has shown that language models are unsupervised multitask learners BIBREF5 and can rediscover the classical NLP pipeline. By fine-tuning them on a specific task, we make them pay attention to the correct level of abstraction corresponding to the scorecard.
Conclusion
In this work, we have shown three main limitations of BLEU and ROUGE and proposed a path forward outlining why and how state of the art language models can be used as summary evaluators. Future work includes extending the proposed scorecard, updating the models matching best the scorecard criteria and assessing published summarization models using that scorecard. | High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries. |
616c205142c7f37b3f4e81c0d1c52c79f926bcdc | 616c205142c7f37b3f4e81c0d1c52c79f926bcdc_0 | Q: What is current state-of-the-art model?
Text: Introduction
In a task-oriented dialogue system, the dialogue policy determines the next action to perform and next utterance to say based on the current dialogue state. A dialogue state defined by frame-and-slot semantics is a set of (key, value) pairs specified by the domain ontology BIBREF0. A key is a (domain, slot) pair and a value is a slot value provided by the user. Figure FIGREF1 shows a dialogue and state in three domain contexts. Dialogue state tracking (DST) in multiple domains is a challenging problem. First of all, in production environments, the domain ontology is being continuously updated such that the model must generalize to new values, new slots, or even new domains during inference. Second, the number of slots and values in the training data are usually quite large. For example, the MultiWOZ $2.0/2.1$ datasets BIBREF1, BIBREF2 have 30 (domain, slot) pairs and more than $4,500$ values BIBREF3. As the model must understand slot and value paraphrases, it is infeasible to train each slot or value independently. Third, multi-turn inferences are often required as shown in the underlined areas of Figure FIGREF1.
Many single-domain DST algorithms have been proposed BIBREF4, BIBREF5, BIBREF6. For example, BIBREF6 learns a local model for each slot and a global model shared by all slots. However, single domain models are difficult to scale to multi-domain settings, leading to the development of multi-domain DST algorithms. For example, BIBREF7 improves BIBREF6's work by removing local models and building a slot-conditioned global model to share parameters between domains and slots, thus computing a score for every (domain, slot, value) tuple. This approach remains problematic for settings with a large value set (e.g., user phone number). BIBREF3 proposes an encoder-decoder architecture which takes dialogue contexts as source sentences and state annotations as target sentences, but does not explicitly use relationships between domains and slots. For example, if a user booked a restaurant and asks for a taxi, then the destination of the taxi is likely to be that restaurant, and if a user booked a 5 star hotel, then the user is likely looking for an expensive rather than a cheap restaurant. As we will show later, such relationships between domains and slots help improve model performance.
To tackle these challenges, we propose DSTQA (Dialogue State Tracking via Question Answering), a new multi-domain DST model inspired by recently developed reading comprehension and question answering models. Our model reads dialogue contexts to answer a series of questions that asks for the value of a (domain, slot) pair. Specifically, we construct two types of questions: 1) multiple choice questions for (domain, slot) pairs with a limited number of value options and 2) span prediction questions, of which the answers are spans in the contexts, designed for (domain, slot) pairs that have a large or infinite number of value options. Finally, we represent (domain, slot) pairs as a dynamically-evolving knowledge graph with respect to the dialogue context, and utilize this graph to drive improved model performance. Our contributions are as follows: (1) we propose to model multi-domain DST as a question answering problem such that tracking new domains, new slots and new values is simply constructing new questions, (2) we propose using a bidirectional attention BIBREF8 based model for multi-domain dialogue state tracking, and (3) we extend our algorithm with a dynamically-evolving knowledge graph to further exploit the structure between domains and slots.
Problem Formulation
In a multi-domain dialogue state tracking problem, there are $M$ domains $D=\lbrace d_1, d_2, ..., d_M\rbrace $. For example, in MultiWOZ 2.0/2.1 datasets, there are 7 domains: restaurant, hotel, train, attraction, taxi, hospital, and police. Each domain $d \in D$ has $N^d$ slots $S^d = \lbrace s^d_1, s^d_2, ...,s^d_{N^d}\rbrace $, and each slot $s \in S^d$ has $K^s$ possible values $V^s=\lbrace v^s_1, v^s_2, ...,v^s_{K^s}\rbrace $. For example, the restaurant domain has a slot named price range, and the possible values are cheap, moderate, and expensive. Some slots do not have pre-defined values, that is, $V^s$ is missing in the domain ontology. For example, the taxi domain has a slot named leave time, but it is a poor choice to enumerate all the possible leave times the user may request as the size of $V^s$ will be very large. Meanwhile, the domain ontology can also change over time. Formally, we represent a dialogue $X$ as $X=\lbrace U^a_1, U^u_1, U^a_2, U^u_2, ..., U^a_T, U^u_T\rbrace $, where $U^a_t$ is the agent utterance in turn $t$ and $U^u_t$ is the user utterance in turn $t$. Each turn $t$ is associated with a dialogue state $\text{y}_t$. A dialogue state $\text{y}_t$ is a set of (domain, slot, value) tuples. Each tuple represents that, up to the current turn $t$, a slot $s \in S^d$ of domain $d \in D$, which takes the value $v \in V^s$ has been provided by the user. Accordingly, $\text{y}_t$'s are targets that the model needs to predict.
Multi-domain Dialogue State Tracking via Question Answering (DSTQA)
We model multi-domain DST as a question answering problem and use machine reading methods to provide answers. To predict the dialogue state at turn $t$, the model observes the context $C_t$, which is the concatenation of $\lbrace U_1^a, U_1^u, ..., U_t^a, U_t^u\rbrace $. The context is read by the model to answer the questions defined as follows. First, for each domain $d \in D$ and each slot $s \in S^d$ where there exists a pre-defined value set $V^s$, we construct a question $Q_{d,s} = \lbrace d, s, V^s, \text{{\tt not mentioned}}, \text{{\tt don^{\prime }t care}}\rbrace $. That is, a question is a set of words or phrases which includes a domain name, a slot name, a list of all possible values, and two special values not mentioned and don't care. One example of the constructed question for restaurant domain and price range slot is $Q_{d,s} = \lbrace \text{{\em restaurant}}, \text{{\em price range}}, \text{{\tt cheap}}, \text{{\tt moderate}}, \text{{\tt expensive}}, \text{{\tt not mentioned}}, \text{{\tt don^{\prime }t care}} \rbrace $. The constructed question represents the following natural language question:
“In the dialogue up to turn $t$, did the user mention the `price range' of the `restaurant' he/she is looking for? If so, which of the following option is correct: A) cheap, B) moderate, C) expensive, D) don't care.”
As we can see from the above example, instead of only using domains and slots to construct questions (corresponding to the natural language question what is the value of this slot?), we also add the candidate values $V^s$ into $Q_{d,s}$. This is because values can be viewed as descriptions or complementary information for domains and slots. For example, cheap, moderate and expensive explain what price range is. In this way, the constructed question $Q_{d,s}$ contains rich information about the domains and slots to predict, and generalizes easily to new values.
In the case that $V^s$ is not available, the question is just the domain and slot names along with the special values, that is, $Q_{d,s} = \lbrace d, s, \text{{\tt not mentioned}}, \text{{\tt don^{\prime }t care}}\rbrace $. For example, the constructed question for train domain and leave time slot is $Q_{d,s} = \lbrace \text{{\em train}}, \text{{\em leave time}}, \text{{\tt not mentioned}}, \text{{\tt don^{\prime }t care}}\rbrace $, and represents the following natural language question:
“In the dialogue up to turn $t$, did the user mention the `leave time' of the `train' he/she is looking for? If so, what is the `leave time' the user preferred?”
The most important concept to note here is that the proposed DSTQA model can be easily extended to new domains, slots, and values. Tracking new domains and slots is simply constructing new queries, and tracking new values is simply extending the constructed question of an existing slot.
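As a minimal illustration of this construction (the ontology fragment below is hypothetical, not the full MultiWOZ ontology):

SPECIAL_VALUES = ["not mentioned", "don't care"]

# Illustrative ontology fragment: the value list is None for slots without a pre-defined value set.
ontology = {
    ("restaurant", "price range"): ["cheap", "moderate", "expensive"],
    ("taxi", "leave time"): None,
}

def build_question(domain, slot, values):
    """Return Q_{d,s}: domain name, slot name, candidate values and the two special values."""
    candidates = list(values) if values is not None else []
    return {"domain": domain, "slot": slot, "values": candidates + SPECIAL_VALUES}

questions = [build_question(d, s, v) for (d, s), v in ontology.items()]
# Tracking a new domain, slot or value only requires adding or extending an entry in `ontology`.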
Although we formulate multi-domain dialogue state tracking as a question answering problem, we want to emphasize that there are some fundamental differences between these two settings. In a standard question answering problem, question understanding is a major challenge and the questions are highly dependent on the context where questions are often of many different forms BIBREF9. Meanwhile, in our formulation, the question forms are limited to two, every turn results in asking a restricted set of question types, and thus question understanding is straightforward. Conversely, our formulation has its own complicating characteristics including: (1) questions in consecutive turns tend to have the same answers, (2) an answer is either a span of the context or a value from a value set, and (3) the questions we constructed have some underlying connections defined by a dynamically-evolving knowledge graph (described in Section SECREF4), which can help improve model performance. In any case, modeling multi-domain DST with this approach allows us to easily transfer knowledge to new domains, slots, and values simply by constructing new questions. Accordingly, many existing reading comprehension algorithms BIBREF8, BIBREF10, BIBREF11, BIBREF12 can be directly applied here. In this paper, we propose a bidirectional attention flow BIBREF8 based model for multi-domain DST.
Multi-domain Dialogue State Tracking via Question Answering (DSTQA) ::: Model Overview
Figure FIGREF3 summarizes the DSTQA architecture, where notable subcomponents are detailed below.
1. Word Embedding Layer: For each word in context $C_t$, similar to BIBREF8, we apply a character embedding layer based on a convolutional neural network to get a $D^{\text{Char}}$ dimensional character-level embedding. We then adopt ELMo BIBREF13, a deep contextualized word representation, to get a $D^{\text{ELMo}}$ dimensional word-level embedding. Other contextualized word embeddings such as BERT BIBREF11 can also be applied here, but this is orthogonal to DSTQA and is left for future work. The final word embedding of context $C_t$ is the concatenation of the character-level embedding and the ELMo embedding, and is denoted by $W^c \in \mathbb {R}^{L_c \times D^w}$, where $L_c$ is the number of words in context $C_t$ and $D^w = D^{\text{ELMo}} + D^{\text{Char}}$. Similarly, for a question $Q_{d, s}$, we treat each element in $Q_{d,s}$ (either a domain name, a slot name, or a value from the value set) as a sentence and compute its word embedding. We then take the mean of the word embeddings in each element as the embedding of that element. Then the question embedding is represented by a set $\lbrace w^{d} \in \mathbb {R}^{D^w}, w^{s} \in \mathbb {R}^{D^w}, W^{\bar{v}} \in \mathbb {R}^{L_{\bar{v}} \times D^w}\rbrace $, where $w^d$, $w^s$ and $W^{\bar{v}}$ are domain, slot and value embeddings, respectively, and $L_{\bar{v}}$ is the number of values in $V^s$ plus not mentioned and don't care. To represent the question embedding as one single matrix, we define $W^q \in \mathbb {R}^{L_{\bar{v}} \times D^w}$, where each row of $W^q$ is calculated by $W^q_{j,:} = w^d + w^s + W^{\bar{v}}_{j,:}$.
2. Context Encoding Layer: We apply a bidirectional GRU to encode the context $C_t$. Denoting the $i$-th word in the context $C_t$ by $w_i$, the input to the bidirectional GRU at time step $i$ is the concatenation of the following three vectors: 1) $w_i$'s word embedding, $W^c_{i, :}$, 2) the corresponding role embedding, and 3) exact match features. There are two role embeddings: the agent role embedding $e_a \in \mathbb {R}^r$ and the user role embedding $e_u \in \mathbb {R}^r$, both of which are trainable. Exact match features are binary indicator features: for each (domain, slot) pair, we search for occurrences of its values in the context in original and lemmatized forms. Then for each (domain, slot) pair, we use two binary features to indicate whether $w_i$ belongs to an occurrence in either form. The final output of this layer is a matrix $E^c \in \mathbb {R}^{L_c \times D^\text{biGRU}}$, where $L_c$ is the number of words in the context $C_t$ and $D^\text{biGRU}$ is the dimension of the bidirectional GRU's hidden states (including both forward and backward hidden states). In our experiments, we set $D^\text{biGRU}$ equal to $D^w$.
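A minimal PyTorch sketch of this encoder, assuming the word embeddings and exact match features are pre-computed (the 60 exact match features correspond to two binary features for each of the 30 (domain, slot) pairs; all sizes are illustrative):

import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, d_word=612, d_role=128, n_match=60, d_hidden=306):
        super().__init__()
        self.role_emb = nn.Embedding(2, d_role)          # 0 = agent, 1 = user
        self.gru = nn.GRU(d_word + d_role + n_match, d_hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, word_emb, role_ids, exact_match):
        # word_emb: (B, L_c, d_word)     ELMo + char-CNN embeddings
        # role_ids: (B, L_c)             speaker of each token
        # exact_match: (B, L_c, n_match) binary value-occurrence features
        x = torch.cat([word_emb, self.role_emb(role_ids), exact_match], dim=-1)
        out, _ = self.gru(x)                             # (B, L_c, 2 * d_hidden) = E^c
        return out

enc = ContextEncoder()
E_c = enc(torch.randn(1, 20, 612), torch.zeros(1, 20, dtype=torch.long), torch.zeros(1, 20, 60))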
3. Question-Context Bidirectional Attention Layer: Inspired by BIBREF8, we apply a bidirectional attention layer which computes attention in two directions: from context $C_t$ to question $Q_{d,s}$, and from question $Q_{d,s}$ to context $C_t$. To do so, we first define an attention function $\mathbb {R}^{m*n} \times \mathbb {R}^n \rightarrow \mathbb {R}^m$ that will be used frequently in the following sections. The inputs to the function are a key matrix $K \in \mathbb {R}^{m * n}$ and a query vector $q \in \mathbb {R}^{n}$. The function calculates the attention score of $q$ over each row of $K$. Let $O \in \mathbb {R}^{m*n}$ be the matrix obtained by repeating $q$ $m$ times, that is, $O_{j,:} = q$ for all $j$. Then, the attention function is defined as:
$\text{Att}_{\beta }(K, q) = \text{Softmax}\left([K; O; K \odot O] \cdot \beta \right)$, where $\beta \in \mathbb {R}^{3 n}$ are learned model parameters, $\odot $ is the element-wise multiplication operator, and $[;]$ is the matrix concatenation operator, concatenating along the feature dimension so that $[K; O; K \odot O] \in \mathbb {R}^{m \times 3n}$. We use a subscript on $\beta $, $\beta _i$, to indicate different instantiations of the attention function.
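A minimal PyTorch sketch of this attention function, following the reconstruction above:

import torch
import torch.nn as nn

class Att(nn.Module):
    """Att_beta(K, q): attention of query q over the rows of key matrix K."""
    def __init__(self, n):
        super().__init__()
        self.beta = nn.Parameter(torch.randn(3 * n))

    def forward(self, K, q):
        O = q.unsqueeze(0).expand_as(K)                  # q repeated m times
        feats = torch.cat([K, O, K * O], dim=-1)         # (m, 3n)
        return torch.softmax(feats @ self.beta, dim=0)   # (m,) attention scores

att = Att(n=612)
scores = att(torch.randn(50, 612), torch.randn(612))     # e.g. context-to-question attention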
The attention score of a context word $w_i$ to values in $Q_{d,s}$ is given by $\alpha ^{v}_i = \text{Att}_{\beta _1}(W^q, E^c_{i,:}) \in \mathbb {R}^{L_{\bar{v}}}$, and the attention score of a value $v_j$ to context words in $C_t$ is given by $\alpha ^{w}_j = \text{Att}_{\beta _1}(E^c, W^q_{j, :}) \in \mathbb {R}^{L_{c}}$. $\beta _1$ is shared between these two attention functions. Then, the question-dependent embedding of context word $w_i$ is $B^{QD}_i = {W^q}^\top \cdot \alpha ^{v}_i$ and can be viewed as the representation of $w_i$ in the vector space defined by the question $Q_{d,s}$. Similarly, the context-dependent embedding for value $v_j$ is $B^{CD}_j = {E^c}^\top \cdot \alpha ^{w}_j$ and can be viewed as the representation of $v_j$ in the vector space defined by the context $C_t$. The final context embedding is $B^c = E^c + B^{QD} \in \mathbb {R}^{L_c \times D^w}$ and the final question embedding is $B^q = B^{CD} + W^q \in \mathbb {R}^{L_{\bar{v}} \times D^w}$.
4. Value Prediction Layer: When $V^s$ exists in $Q_{d,s}$, we calculate a score for each value in $Q_{d,s}$, and select the one with the highest score as the answer. First, we define a bilinear function $\mathbb {R}^{m*n} \times \mathbb {R}^n \rightarrow \mathbb {R}^m$. It takes a matrix $X \in \mathbb {R}^{m*n}$ and a vector $y \in \mathbb {R}^n$, returning a vector of length $m$: $\text{BiLinear}(X, y) = X \Phi y$, where $\Phi \in \mathbb {R}^{n*n}$ are learned model parameters. Again, we use a subscript on $\Phi $, $\Phi _i$, to indicate different instantiations of the function.
We summarize the context $B^c$ into a single vector with respect to the domain and slot and then apply a bilinear function to calculate the score of each value. More specifically, we calculate the score of each value $v$ at turn $t$ by $p^v_t = \text{Softmax}\left(\text{BiLinear}_{\Phi _1}\left(B^q, {B^c}^\top \cdot \alpha ^b\right)\right)$ where $\alpha ^b = \text{Att}_{\beta _2}(B^c, w^d + w^s) \in \mathbb {R}^{L_c}$ is the attention score over $B^c$, and $p^v_t \in \mathbb {R}^{L_{\bar{v}}}$. We calculate the cross entropy loss of the predicted scores by $\text{Loss}_v = \sum _t \sum _{d \in D,s \in \hat{S}^d}\text{CrossEntropy}\left(p_t^v, y_t^v\right)$ where $y^v_t \in \mathbb {R}^{L_{\bar{v}}}$ is the label, which is the one-hot encoding of the true value of domain $d$ and slot $s$, and $\hat{S}^d$ is the set of slots in domain $d$ that have a pre-defined $V^s$.
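A minimal sketch of the bilinear value-scoring step (the attention weights $\alpha ^b$ are stubbed with random values here; in the model they come from $\text{Att}_{\beta _2}$, and the loss computation is omitted):

import torch
import torch.nn as nn

class BiLinear(nn.Module):
    """BiLinear_Phi(X, y) = X Phi y, scoring each row of X against y."""
    def __init__(self, n):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(n, n))

    def forward(self, X, y):
        return X @ self.phi @ y                          # (m,)

d_w, n_values, L_c = 612, 7, 50
bilinear = BiLinear(d_w)
B_q = torch.randn(n_values, d_w)                         # question (value) embeddings
B_c = torch.randn(L_c, d_w)                              # context embeddings
alpha_b = torch.softmax(torch.randn(L_c), dim=0)         # stub for Att_{beta_2}(B^c, w^d + w^s)
p_v = torch.softmax(bilinear(B_q, B_c.t() @ alpha_b), dim=0)  # score for each candidate value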
5. Span Prediction Layer: When the value set $V^s$ is unknown or too large to enumerate, such as pick up time in the taxi domain, we predict the answer to a question $Q_{d,s}$ as either a span in the context or one of two special types: not mentioned and don't care. The span prediction layer has two components. The first component predicts the answer type of $Q_{d,s}$. The type of the answer is either not mentioned, don't care or span, and is calculated by $ p^{st}_t = \text{Softmax}(\Theta _1 \cdot (w^d + w^s + {E^c}^\top \cdot \alpha ^e ) ) $ where $\alpha ^e = \text{Att}_{\beta _3}(E^c, w^d + w^s) \in \mathbb {R}^{L_c}$, $\Theta _1 \in \mathbb {R}^{3 * D^w}$ is a model parameter to learn, and $p^{st}_t \in \mathbb {R}^3$. The loss of span type prediction is $ \text{Loss}_{st} = \sum _t \sum _{d \in D, s\in \bar{S}^d} \text{CrossEntropy}\left(p^{st}_t, y^{st}_t\right) $ where $y^{st}_t \in \mathbb {R}^3$ is the one-hot encoding of the true span type label, and $\bar{S}^d$ is the set of slots in domain $d$ that have no pre-defined $V^s$. The second component predicts a span in the context corresponding to the answer of $Q_{d,s}$. To get the probability distribution of a span's start index, we apply a bilinear function between contexts and (domain, slot) pairs. More specifically, $p_t^{ss} = \text{Softmax}\left(\text{BiLinear}_{\Phi _2}\left(\text{Relu}(E^c \cdot \Theta _2), w^d + w^s + {E^c}^\top \cdot \alpha ^e\right)\right)$ where $\Theta _2 \in \mathbb {R}^{D^w * D^w}$ and $p_t^{ss} \in \mathbb {R}^{L_c}$. The $\text{Bilinear}$ function's first argument is a non-linear transformation of the context embedding, and its second argument is a context-dependent (domain, slot) pair embedding. Similarly, the probability distribution of a span's end index is $p_t^{se} = \text{Softmax}\left(\text{BiLinear}_{\Phi _3}\left(\text{Relu}(E^c \cdot \Theta _3), w^d + w^s + {E^c}^\top \cdot \alpha ^e\right)\right)$
where $\Theta _3 \in \mathbb {R}^{D^w * D^w}$ and $p_t^{se} \in \mathbb {R}^{L_c}$. The prediction loss is $\text{Loss}_{span} = \sum _t \sum _{d \in D, s\in \bar{S}^d} \text{CrossEntropy}(p^{ss}_t, y^{ss}_t) + \text{CrossEntropy}(p^{se}_t, y^{se}_t)$ where $y^{ss}_t, y^{se}_t \in \mathbb {R}^{L_c}$ is one-hot encodings of true start and end indices, respectively. The score of a span is the multiplication of probabilities of its start and end index. The final loss function is: $ \text{Loss} = \text{Loss}_v + \text{Loss}_{st} + \text{Loss}_{span} $. In most publicly available dialogue state tracking datasets, span start and end labels do not exist. In Section SECREF11 we will show how we construct these labels.
Dynamic Knowledge Graph for Multi-domain dialogue State Tracking
In our problem formulation, at each turn, our proposed algorithm asks a set of questions, one for each (domain, slot) pair. In fact, the (domain, slot) pairs are not independent. For example, if a user requested a train for 3 people, then the number of people for hotel reservation may also be 3. If a user booked a restaurant, then the destination of the taxi is likely to be that restaurant. Specifically, we observe four types of relationships between (domain, slot) pairs in MultiWOZ $2.0$/$2.1$ dataset:
$(s, r_v, s^{\prime })$: a slot $s \in S^d$ and another slot $s^{\prime } \in S^{d^{\prime }}$ have the same set of possible values. That is, $V^s$ equals to $V^{s^{\prime }}$. For example, in MultiWOZ $2.0$/$2.1$ dataset, domain-slot pairs (restaurant, book day) and (hotel, book day) have this relationship.
$(s, r_s, s^{\prime })$: the value set of a slot $s \in S^d$ is a subset of the value set of $s^{\prime } \in S^{d^{\prime }}$. For example, in MultiWOZ $2.0$/$2.1$ dataset, value sets of (restaurant, name), (hotel, name), (train, station) and (attraction, name) are subsets of the value set of (taxi, destination).
$(s, r_c, s^{\prime })$: the informed value $v \in V^s$ of slot $s$ is correlated with the informed value $v^{\prime } \in V^{s^{\prime }}$ of slot $s^{\prime }$ even though $V^s$ and $V^{s^{\prime }}$ do not overlap. For example, in MultiWOZ $2.0$/$2.1$ dataset, the price range of a reserved restaurant is correlated with the star rating of the booked hotel. This relationship is not explicitly given in the ontology.
$(s, r_i, v)$: the user has informed value $v \in V^s$ of slot $s \in S^d$.
In this section, we propose using a dynamic knowledge graph to further improve model performance by exploiting this information. We represent (domain, slot) pairs and values as nodes in a graph linked by the relationship defined above, and then propagate information between them. The graph is dynamically evolving, since the fourth relationship above, $r_i$, depends on the dialogue context.
Dynamic Knowledge Graph for Multi-domain dialogue State Tracking ::: Graph Definition
The right-hand side of Figure FIGREF3 is an example of the graph we defined based on the ontology. There are two types of nodes $\lbrace M, N\rbrace $ in the graph. One is a (domain, slot) pair node representing a (domain, slot) pair in the ontology and another is a value node representing a value from a value set. For a domain $d \in D$ and a slot $s \in S^d$, we denote the corresponding node by $M_{d,s}$, and for a value $v \in V^s$, we denote the corresponding node by $N_{v}$. There are also two types of edges. One type is the links between $M$ and $N$. At each turn $t$, if the answer to question $Q_{d, s}$ is $v \in V^s$, then $N_v$ is added to the graph and linked to $M_{d,s}$. By default, $M_{d, s}$ is linked to a special not mentioned node. The other type of edges is links between nodes in $M$. Ideally we want to link nodes in $M$ based on the first three relationships described above. However, while $r_v$ and $r_s$ are known given the ontology, $r_c$ is unknown and cannot be inferred just based on the ontology. As a result, we connect every node in $M$ (i.e. the (domain, slot) pair nodes) with each other, and let the model to learn their relationships with an attention mechanism, which will be described shortly.
Dynamic Knowledge Graph for Multi-domain dialogue State Tracking ::: Attention Over the Graph
We use an attention mechanism to calculate the importance of a node's neighbors to that node, and then aggregate node embeddings based on attention scores. BIBREF14 describes a graph attention network, which performs self-attention over nodes. In contrast with their work, we use dialogue contexts to attend over nodes.
Our attention mechanism has two steps. The first step is to propagate the embedding of $N_v$ to its linked $M_{d,s}$, so that the embedding of $M_{d,s}$ depends on the value prediction from previous turns. We propagate $N_v$'s embedding by $g_{d,s} = \eta (w^d + w^s) + (1-\eta ) (\Theta _4 \cdot W_{v,:}^{\bar{v}})$ where $g_{d,s} \in \mathbb {R}^{D^w}$ is the new embedding of $M_{d,s}$, $\eta \in [0, 1]$ is a hyper-parameter, and $\Theta _4 \in \mathbb {R}^{D^w \times D^w}$ is a model parameter to learn. $g_{d,s}$ essentially carries the following information: in previous turns, the user has mentioned value $v$ of a slot $s$ from a domain $d$. In practice, we find out that simply adding $w^d$, $w^s$ and $W^{\bar{v}}$ yields the best result. That is $g_{d,s} = w^d + w^s + W_{v,:}^{\bar{v}}$. The second step is to propagate information between nodes in $M$. For each domain $d$ and slot $s$, ${B^c}^\top \cdot \alpha ^b$ in Equation (SECREF2) is the summarized context embedding with respect to $d$ and $s$. We use this vector to attend over all nodes in $M$, and the attention score is $\alpha ^g = \text{Att}_{\beta _4}(G, {B^c}^\top \cdot \alpha ^b)$, where $G \in \mathbb {R}^{|M| * D^w}$ is a matrix stacked by $g_{d,s}^\top $. The attention scores can be interpreted as the learned relationships between the current (domain, slot) node and all other (domain, slot) nodes. Using context embeddings to attend over the graph allows the model to assign attention score of each node based on dialogue contexts. Finally, The graph embedding is $z_{d,s} = G \cdot \alpha ^g$. We inject $z_{d,s}$ to Equation ($\ref {eq:vscore}$) with a gating mechanism:
where $\gamma =({B^c}^\top \cdot \alpha ^b + z_{d,s})$ is the gate and controls how much graph information should flow to the context embedding given the dialogue context. Some utterances such as “book a taxi to Cambridge station" do not need information in the graph, while utterances such as “book a taxi from the hotel to the restaurant" need information from other domains. $\gamma $ dynamically controls to what degree the graph embedding is used, and the graph parameters are trained together with all other parameters.
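A minimal sketch of the graph step for a single (domain, slot) node. The node embeddings, the attention over nodes and a gated combination are shown; because the display equation for the gated injection is not reproduced in the text, the sigmoid gate and the convex combination below are assumptions rather than the paper's exact formulation:

import torch
import torch.nn as nn

d_w, n_nodes = 612, 30
att_beta4 = nn.Parameter(torch.randn(3 * d_w))            # attention parameters over graph nodes

def graph_embedding(G, context_vec):
    # G: (n_nodes, d_w) node embeddings g_{d,s} = w^d + w^s + W^{v-bar}_{v,:}
    # context_vec: (d_w,) summarized context embedding B^c^T . alpha^b
    O = context_vec.unsqueeze(0).expand_as(G)
    alpha_g = torch.softmax(torch.cat([G, O, G * O], dim=-1) @ att_beta4, dim=0)
    return G.t() @ alpha_g                                 # z_{d,s}, shape (d_w,)

G = torch.randn(n_nodes, d_w)
context_vec = torch.randn(d_w)
z = graph_embedding(G, context_vec)
gate = torch.sigmoid(context_vec + z)                      # assumed form of the gate gamma
fused = gate * context_vec + (1 - gate) * z                # assumed gated injection into the value scorer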
Experiments
We evaluate our model on three publicly available datasets: (non-multi-domain) WOZ $2.0$ BIBREF4, MultiWOZ $2.0$ BIBREF1, and MultiWOZ $2.1$ BIBREF2. Due to limited space, please refer to Appendix SECREF23 for results on (non-multi-domain) WOZ $2.0$ dataset. MultiWOZ $2.0$ dataset is collected from a Wizard of Oz style experiment and has 7 domains: restaurant, hotel, train, attraction, taxi, hospital, and police. Similar to BIBREF3, we ignore the hospital and police domains because they only appear in training set. There are 30 (domain, slot) pairs and a total of 10438 task-oriented dialogues. A dialogue may span across multiple domains. For example, during the conversation, a user may book a restaurant first, and then book a taxi to that restaurant. For both datasets, we use the train/test splits provided by the dataset. The domain ontology of the datasets is described in Appendix SECREF25. MultiWOZ $2.1$ contains the same dialogues and ontology as MultiWOZ $2.0$, but fixes some annotation errors in MultiWOZ $2.0$.
Two common metrics to evaluate dialogue state tracking performance are joint accuracy and slot accuracy. Joint accuracy is the accuracy of dialogue states. A dialogue state is correctly predicted only if all the values of (domain, slot) pairs are correctly predicted. Slot accuracy is the accuracy of (domain, slot, value) tuples. A tuple is correctly predicted only if the value of the (domain, slot) pair is correctly predicted. In most of the literature, joint accuracy is considered the more challenging and more important metric.
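A small sketch of how the two metrics can be computed over predicted and gold dialogue states (each state represented here as a dict from (domain, slot) to value; the slot accuracy below is a simplified variant that only iterates over the gold slots):

def joint_and_slot_accuracy(predictions, golds):
    # predictions, golds: lists of dicts {(domain, slot): value}, one per dialogue turn
    joint_correct, slot_correct, slot_total = 0, 0, 0
    for pred, gold in zip(predictions, golds):
        joint_correct += int(pred == gold)               # every (domain, slot) value must match
        for key, value in gold.items():
            slot_total += 1
            slot_correct += int(pred.get(key) == value)
    return joint_correct / len(golds), slot_correct / slot_total

preds = [{("restaurant", "price range"): "cheap"}]
golds = [{("restaurant", "price range"): "cheap"}]
print(joint_and_slot_accuracy(preds, golds))             # (1.0, 1.0)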
Experiments ::: Implementation Details
Existing dialogue state tracking datasets, such as MultiWOZ $2.0$ and MultiWOZ 2.1, do not have annotated span labels but only have annotated value labels for slots. As a result, we preprocess MultiWOZ $2.0$ and MultiWOZ $2.1$ dataset to convert value labels to span labels: we take a value label in the annotation, and search for its last occurrence in the dialogue context, and use that occurrence as span start and end labels. There are 30 slots in MultiWOZ $2.0$/$2.1$ dataset, and 5 of them are time related slots such as restaurant book time and train arrive by, and the values are 24-hour clock time such as 08:15. We do span prediction for these 5 slots and do value prediction for the rest of slots because it is not practical to enumerate all time values. We can also do span prediction for other slots such as restaurant name and hotel name with the benefit of handling out-of-vocabulary values, but we leave these experiments as future work. WOZ $2.0$ dataset only has one domain and 3 slots, and we do value prediction for all these slots without graph embeddings.
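A minimal sketch of this preprocessing step (tokenization is simplified to whitespace splitting, which is an assumption about details the paper does not spell out):

def value_to_span(context_tokens, value):
    """Return (start, end) indices of the last occurrence of `value` in the context, or None."""
    value_tokens = value.lower().split()
    span = None
    for i in range(len(context_tokens) - len(value_tokens) + 1):
        if [t.lower() for t in context_tokens[i:i + len(value_tokens)]] == value_tokens:
            span = (i, i + len(value_tokens) - 1)        # keep the last match
    return span

ctx = "i want to leave after 08:15 please".split()
print(value_to_span(ctx, "08:15"))                       # (5, 5)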
We implement our model using the AllenNLP BIBREF15 framework. For experiments with ELMo embeddings, we use a pre-trained ELMo model in which the output size is $D^{ELMo} = 512$. The dimension of character-level embeddings is $D^{Char} = 100$, making $D^w = 612$. ELMo embeddings are fixed during training. For experiments with GloVe embeddings, we use GloVe embeddings pre-trained on the Common Crawl dataset. The dimension of GloVe embeddings is 300, and the dimension of character-level embeddings is 100, such that $D^w = 400$. GloVe embeddings are trainable during training. The size of the role embedding is 128. The dropout rate is set to $0.5$. We use Adam as the optimizer and the learning rate is set to $0.001$. We also apply word dropout that randomly drops out words in the dialogue context with probability $0.1$.
When training DSTQA with the dynamic knowledge graph, in order to predict the dialogue state and calculate the loss at turn $t$, we use the model with current parameters to predict the dialogue state up until turn $t-1$, and dynamically construct a graph for turn $t$. We have also tried teacher forcing, which constructs the graph with ground truth labels (or samples ground truth labels with an annealed probability), but we observe a negative impact on joint accuracy. On the other hand, a target network BIBREF16 may be useful here and will be investigated in the future. More specifically, we can keep a copy of the model that updates periodically, and use this model copy to predict the dialogue state up until turn $t-1$ and construct the graph.
Experiments ::: Results on MultiWoz 2.0 and MultiWOZ 2.1 dataset.
We first evaluate our model on MultiWOZ 2.0 dataset as shown in Table TABREF16. We compare with five published baselines. TRADE BIBREF3 is the current published state-of-the-art model. It utilizes an encoder-decoder architecture that takes dialogue contexts as source sentences, and takes state annotations as target sentences. SUMBT BIBREF17 fine-tunes a pre-trained BERT model BIBREF11 to learn slot and utterance representations. Neural Reading BIBREF18 learns a question embedding for each slot, and predicts the span of each slot value. GCE BIBREF7 is a model improved over GLAD BIBREF6 by using a slot-conditioned global module. Details about baselines are in Section SECREF6.
For our model, we report results under two settings. In the DSTQA w/span setting, we do span prediction for the five time related slots as mentioned in Section SECREF11. This is the most realistic setting, as enumerating all possible time values is not practical in a production environment. In the DSTQA w/o span setting, we do value prediction for all slots, including the five time related slots. To do this, we collect all time values appearing in the training data to create a value list for time related slots, as is done in the baseline models. This works on these two datasets because there are only 173 time values in the training data, and only 14 out-of-vocabulary time values in the test data. Note that in all our baselines, values appearing in the training data are either added to the vocabulary or added to the domain ontology, so DSTQA w/o span is still a fair comparison with the baseline methods. Our model outperforms all models. DSTQA w/span has a $5.64\%$ relative improvement and a $2.74\%$ absolute improvement over TRADE. We also show the performance on each single domain in Appendix SECREF27. DSTQA w/o span has a $5.80\%$ relative improvement and a $2.82\%$ absolute improvement over TRADE. We can see that DSTQA w/o span performs better than DSTQA w/span. This is mainly because we introduce noise when constructing the span labels; moreover, span prediction cannot take advantage of the bidirectional attention mechanism. However, DSTQA w/o span cannot handle out-of-vocabulary values and can generalize to new values only by expanding the value sets; moreover, its performance may decrease when the size of the value sets increases.
Table TABREF17 shows the results on MultiWOZ $2.1$ dataset. Compared with TRADE, DSTQA w/span has a $8.93\%$ relative improvement and a $4.07\%$ absolute improvement. DSTQA w/o span has a $12.21\%$ relative improvement and a $5.57\%$ absolute improvement. More baselines can be found at the leaderboard. Our model outperforms all models on the leaderboard at the time of submission of this paper.
Ablation Study: Table TABREF16 also shows the results of ablation study of DSTQA w/span on MultiWOZ $2.0$ dataset. The first experiment completely removes the graph component, and the joint accuracy drops $0.47\%$. The second experiment keeps the graph component but removes the gating mechanism, which is equivalent to setting $\gamma $ in Equation (DISPLAY_FORM10) to $0.5$, and the joint accuracy drops $0.98\%$, demonstrating that the gating mechanism is important when injecting graph embeddings and simply adding the graph embeddings to context embeddings can negatively impact the performance. In the third experiment, we replace $B_i^{QD}$ with the mean of query word embeddings and replace $B_j^{CD}$ with the mean of context word embeddings. This is equivalent to setting the bi-directional attention scores uniformly. The joint accuracy significantly drops $1.62\%$. The fourth experiment completely removes the bi-directional attention layer, and the joint accuracy drops $1.85\%$. Both experiments show that bidirectional attention layer has a notably positive impact on model performance. The fifth experiment substitute ELMo embeddings with GloVe embeddings to demonstrate the benefit of using contextual word embeddings. We plan to try other state-of-the-art contextual word embeddings such as BERT BIBREF11 in the future. We further show the model performance on different context lengths in Appendix SECREF30.
Experiments ::: Generalization to New Domains
Table TABREF20 shows the model performance on new domains. We take one domain in MultiWOZ $2.0$ as the target domain, and the remaining 4 domains as source domains. Models are trained either from scratch using only $5\%$ or $10\%$ sampled data from the target domain, or first trained on the 4 source domains and then fine-tuned on the target domain with sampled data. In general, a model that achieves higher accuracy by fine-tuning is more desirable, as it indicates that the model can quickly adapt to new domains given limited data from the new domain. In this experiment, we compare DSTQA w/span with TRADE. As shown in Table TABREF20, DSTQA consistently outperforms TRADE when fine-tuning on $5\%$ and $10\%$ new domain data. With $5\%$ new domain data, DSTQA fine-tuning has an average of $43.32\%$ relative improvement over DSTQA training from scratch, while TRADE fine-tuning only has an average of $19.99\%$ relative improvement over TRADE training from scratch. DSTQA w/ graph also demonstrates its benefit over DSTQA w/o graph, especially on the taxi domain. This is because the `taxi' domain is usually mentioned at the latter part of the dialogue, and the destination and departure of the taxi are usually the restaurant, hotel, or attraction mentioned in the previous turns and are embedded in the graph.
Experiments ::: Error Analysis
Figure FIGREF22 shows the different types of model prediction errors on MultiWOZ $2.1$ dataset made by DSTQA w/span as analyzed by the authors. Appendix SECREF34 explains the meaning of each error type and also list examples for each error type. At first glance, annotation errors and annotation disagreements account for $56\%$ of total prediction errors, and are all due to noise in the dataset and thus unavoidable. Annotation errors are the most frequent errors and account for $28\%$ of total prediction errors. Annotation errors means that the model predictions are incorrect only because the corresponding ground truth labels in the dataset are wrong. Usually this happens when the annotators neglect the value informed by the user. Annotator disagreement on user confirmation accounts for $28\%$ ($15\% + 13\%$) of total errors. This type of errors comes from the disagreement between annotators when generating ground truth labels. All these errors are due to the noise in the dataset and unavoidable, which also explains why the task on MultiWOZ $2.1$ dataset is challenging and the state-of-the-art joint accuracy is less than $50\%$.
Values exactly matched but not recognized ($10\%$) and paraphrases not recognized ($14\%$) mean that the user mentions a value or a paraphrase of a value, but the model fails to recognize it. Multi-turn inferences failed ($6\%$) means that the model fails to refer to previous utterances when making prediction. User responses not understood ($8\%$) and implications not understood ($3\%$) mean that the model does not understand what the user says and fails to predict based on user responses. Finally, incorrect value references ($2\%$) means that there are multiple values of a slot in the context and the model refers to an incorrect one, and incorrect domain references ($1\%$) means that the predicted slot and value should belong to another domain. All these errors indicate insufficient understanding of agent and user utterances. A more powerful language model and a coreference resolution modules may help mitigate these problems. Please refer to Appendix SECREF34 for examples.
Related Works
Our work is most closely related to previous works in dialogue state tracking and question answering. Early models of dialogue state tracking BIBREF19, BIBREF20, BIBREF21 rely on handcrafted features to extract utterance semantics, and then use these features to predict dialogue states. Recently BIBREF4 propose to use convolutional neural network to learn utterance $n$-gram representation, and achieve better performance than handcrafted features-based model. However, their model maintains a separate set of parameters for each slot and does not scale well. Models that handles scalable multi-domain DST have then been proposed BIBREF22, BIBREF23. BIBREF6 and BIBREF7 propose a global-local architecture. The global module is shared by all slots to transfer knowledge between them. BIBREF5 propose to share all parameters between slots and fix the word embeddings during training, so that they can handle new slots and values during inference. However, These models do not scale when the sizes of value sets are large or infinite, because they have to evaluate every (domain, slot, tuple) during the training. BIBREF24 propose to use a pointer network with a Seq2Seq architecture to handle unseen slot values. BIBREF17 encode slots and utterances with a pre-trained BERT model, and then use a slot utterance matching module, which is a multi-head attention layer, to compute the similarity between slot values and utterances. BIBREF25 release a schema-guided DST dataset which contains natural language description of domains and slots. They also propose to use BERT to encode these natural language description as embeddings of domains and slots. BIBREF3 propose to use an encoder-decoder architecture with a pointer network. The source sentences are dialogue contexts and the target sentences are annotated value labels. The model shares parameters across domains and does not require pre-defined domain ontology, so it can adapt to unseen domains, slots and values. Our work differs in that we formulate multi-domain DST as a question answering problem and use reading comprehension methods to provide answers. There have already been a few recent works focusing on using reading comprehension models for dialogue state tracking. For example, BIBREF26 formulate slot tracking as four different types of questions (Factoid, Yes/No, Indefinite knowledge, Counting and Lists/Sets), and use memory network to do reasoning and to predict answers. BIBREF18 construct a question for each slot, which basically asks what is the value of slot i, then they predict the span of the value/answer in the dialogue history. Our model is different from these two models in question representation. We not only use domains and slots but also use lists of candidate values to construct questions. Values can be viewed as descriptions to domains and slots, so that the questions we formulate have richer information about domains and slots, and can better generalize to new domains, slots, and values. Moreover, our model can do both span and value prediction, depending on whether the corresponding value lists exists or not. Finally, our model uses a dynamically-involving knowledge graph to explicitly capture interactions between domains and slots.
In a reading comprehension BIBREF27 task, there is one or more context paragraphs and a set of questions. The task is to answer questions based on the context paragraphs. Usually, an answer is a text span in a context paragraph. Many reading comprehension models have been proposed BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF28. These models encode questions and contexts with multiple layers of attention-based blocks and predict answer spans based on the learned question and context embeddings. Some works also explore to further improve model performance by knowledge graph. For example BIBREF29 propose to build a heterogeneous graph in which the nodes are knowledge base entities and context paragraphs, and nodes are linked by entity relationships and entity mentions in the contexts. BIBREF30 propose to use Open IE to extract relation triples from context paragraphs and build a contextual knowledge graph with respect to the question and context paragraphs. We would expect many of these technical innovations to apply given our QA-based formulation.
Conclusion
In this paper, we model multi-domain dialogue state tracking as question answering with a dynamically-evolving knowledge graph. Such a formulation enables the model to generalize to new domains, slots and values by simply constructing new questions. Our model achieves state-of-the-art results on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets with a $5.80\%$ and a $12.21\%$ relative improvement, respectively. Also, our domain expansion experiments show that our model can better adapt to unseen domains, slots and values compared with the previous state-of-the-art model.
Appendix ::: Results on WOZ 2.0 dataset
We also evaluate our algorithm on the WOZ $2.0$ dataset BIBREF4.
The WOZ $2.0$ dataset has 1200 restaurant domain task-oriented dialogues. There are three slots: `food', `area', `price range', and a total of 91 slot values. The dialogues are collected from a Wizard of Oz style experiment, in which the task is to find a restaurant that matches the slot values the user has specified. Each turn of a dialogue is annotated with a dialogue state, which indicates the slot values the user has informed. One example of a dialogue state is {`food':`Mexican', `area':`east', `price range':`moderate'}.
Table TABREF24 shows the results on the WOZ $2.0$ dataset. We compare with four published baselines. SUMBT BIBREF17 is the current state-of-the-art model on the WOZ 2.0 dataset. It fine-tunes a pre-trained BERT model BIBREF11 to learn slot and utterance representations. StateNet PSI BIBREF5 maps contextualized slot embeddings and value embeddings into the same vector space, and calculates the Euclidean distance between these two. It also learns a joint model of all slots, enabling parameter sharing between slots. GLAD BIBREF6 proposes to use a global module to share parameters between slots and a local module to learn slot-specific features. Neural Belief Tracker BIBREF4 applies a CNN to learn n-gram utterance representations. Unlike prior works that transfer knowledge between slots by sharing parameters, our model implicitly transfers knowledge by formulating each slot as a question and learning to answer all the questions. Our model has a $1.24\%$ relative joint accuracy improvement over StateNet PSI. Although SUMBT achieves higher joint accuracy than DSTQA on the WOZ $2.0$ dataset, DSTQA achieves better performance than SUMBT on the MultiWOZ $2.0$ dataset, which is a more challenging dataset.
Appendix ::: MultiWOZ 2.0/2.1 Ontology
The ontology of the MultiWOZ $2.0$ and MultiWOZ $2.1$ datasets is shown in Table TABREF26. There are 5 domains and 30 slots in total. (Two other domains, `hospital' and `police', are ignored as they only exist in the training set.)
Appendix ::: Performance on Each Individual Domain
We show the performance of DSTQA w/span and TRADE on each single domain. We follow the same procedure as BIBREF3 to construct the training and test datasets for each domain: a dialogue is excluded from a domain's training and test datasets if it does not mention any slots from that domain. During training, slots from other domains are ignored. Table TABREF28 shows the results. We can see that our model achieves better results on every domain, especially the hotel domain, which has an $11.24\%$ relative improvement. Hotel is the hardest domain as it has the most slots (10 slots) and has the lowest joint accuracy among all domains.
Appendix ::: Joint Accuracy v.s. Context Length
We further show the model performance with different context lengths. Context length means the number of previous turns included in the dialogue context. Note that our baseline algorithms either use all previous turns as contexts to predict belief states or accumulate turn-level states of all previous turns to generate belief states. The results are shown in Figure FIGREF29. We can see that DSTQA with the graph outperforms DSTQA without the graph. This is especially true when the context length is short, because the graph carries information over multiple turns which can be used for multi-turn inference. This is especially useful when we want a shorter context length to reduce computational cost. In this experiment, the DSTQA model we use is DSTQA w/span.
Appendix ::: Accuracy per Slot
The accuracy of each slot on the MultiWOZ $2.0$ and MultiWOZ $2.1$ test sets is shown in Figure FIGREF32 and Figure FIGREF33, respectively. Name-related slots such as restaurant name, attraction name, and hotel name have high error rates, because these slots have very large value sets and high annotation error rates.
Appendix ::: Examples of Prediction Errors
This section describes prediction errors made by DSTQA w/span. Incorrectly predicted (domain, slot, value) tuples are marked by underlines. 1. Annotation errors
Description: The ground truth label in the dataset is wrong. This can happen when either 1) annotators neglect slots mentioned in the user utterance, or 2) annotators mistakenly choose the wrong label for a slot.
Examples:
User: I would like to find a museum in the west to go to. Agent: There are several museums in the west. I recommend the Cafe Jello Gallery. User: Can I have the address of the Cafe Jello museum? Agent: The Cafe Jello Gallery is at 13 Magdalene street. Is there anything else? User: Is there a moderately priced British restaurant any where in town? Annotation: {(restaurant, food, British), (restaurant, price range, moderate), (restaurant, area, west)} Prediction: {(restaurant, food, British), (restaurant, price range, moderate), (restaurant, area, don't care)}
2. Annotator disagreement on user confirmation (labeled)
Description: This type of error comes from the disagreement between annotators when generating ground truth labels. More specifically, in a dialogue, the agent sometimes proposes a suggestion (a value of a slot) to the user, followed by the user's positive confirmation. For example, the agent says `I would recommend Little Seoul. Would you like to make a reservation?'. The user confirms with `yes, please'. Since the user positively confirms the agent's suggestion, the (domain, slot, value) tuple mentioned by the agent, or the (restaurant, name, Little Seoul) tuple in this example, can be added into the belief state. However, based on our observation of the MultiWOZ 2.0 and MultiWOZ 2.1 datasets, the annotators are inconsistent, and only about half of the time are these tuples added to the belief states. An error of this type comes from the scenario in which the tuple is added to the belief state by the annotator but not by the model (i.e. the model predicts `None' for the corresponding (domain, slot) pair).
Examples:
User: I would like to find an architectural attraction to go to in the centre. Can you help me? Agent: Yes! I found 5 attractions matching your requirements . Would you like to know more? User: I would like to get the entrance fee and address of them please. Agent: Great Saint Mary's Church is 2 pounds and the other 4 are free. Would you like the address for Great Saint Mary's church? User: Yes, that would be great. Annotation: {(attraction, type, architecture), (attraction, area, centre), (attraction, name, Great Saint Mary's Church)} Prediction: {(attraction, type, architecture), (attraction, area, centre)} 3. Annotator disagreement on user confirmation (unlabeled)
Description: Similar to the last type of error, this one comes from the scenario in which the tuple is added to the belief state by the model but not by the annotator.
Examples:
Agent: There are many great restaurants in the centre. Is there a certain type of food you would like to have? User: What do you have that is expensive? Agent: Yes, there are many. Would you be interested in trying British cuisine or African? User: British cuisine is fine. Can you suggest a restaurant? Agent: Fitzbillies restaurant meets all of your requirements. Would you like more info on them? User: Book this for 6 people on Monday at 12:30. Annotation: {(restaurant, area, centre), (restaurant, food, British), (restaurant, price range, expensive), (restaurant, time, 12:30), (restaurant, book people, 6), (restaurant, day, Monday)} Prediction: {(restaurant, area, centre), (restaurant, food, British), (restaurant, price range, expensive), (restaurant, time, 12:30), (restaurant, book people, 6), (restaurant, day, Monday), (restaurant, name, Fitzbillies restaurant)}
4. Paraphrases not recognized
Description: A paraphrase of a value is not recognized by the model.
Example:
User: Can you help me find a place to go in the centre? Agent: I can help you with that. Is there a certain kind of attraction that you would like to visit? User: Surprise me. Give me the postcode as well. Annotation: {(attraction, area, centre), (attraction, area, don't care)} Prediction: {(attraction, area, centre)}
5. Value exactly matched but not recognized
Description: The value of a slot is mentioned and exactly matched in the user's utterance, but the model fails to recognize and predict it.
Examples:
Agent: I am sorry, there is no restaurant serving specifically North American or American food in my database, is there another type of food you would consider? User: How about Modern European food? Agent: There are 3 Modern European restaurants. Two in the centre and one in the south. Do you have a preference? User: I would prefer the one on the centre, could I have the phone number and postcode please? Annotation: {(restaurant, food, Modern European), (restaurant, area, centre)} Prediction: {(restaurant, food, Modern European)}
6. User responses misunderstood
Description: The model misunderstands the user's intention and fails to make the correct prediction based on the user utterance.
Examples:
User: I could use some help finding a restaurant that is moderately priced. Agent: We have many options that are moderately priced. Is there a specific area or type of cuisine you are looking for? User: I do not care about the cuisine but I want it to be in the west. Agent: We have Prezzo. It is an Italian restaurant located in the west. it is moderately priced. Would you like me to book it for you? User: That will not be necessary. What is the postcode? Agent: Prezzo's postcode is cb30ad. Annotation: {(restaurant, price range, moderate), (restaurant, area, west)} Prediction: {(restaurant, price range, moderate), (restaurant, area, west), (restaurant, name, Prezzo)}
7. Multi-turn inference failed
Description: In this scenario, predicting the value of a slot requires information from multiple turns, but the model fails to perform multi-turn inference.
Example:
User: Hello, may I have a list of museums in the west? Agent: There are 7: Cafe Jello Gallery, Cambridge and County Folk Museum, ... User: Please give me the entrance fee and postcode of County Folk Museum Agent: The entrance fee is $3.50$ pounds and the postcode is cb30aq. Would you like any other information? User: I need a place to eat near the museum. I do not want to spend much so it should be cheap. what do you have? Annotation: {(attraction, area, west), (attraction, type, museum), (attraction, name, Cambridge and County Folk Museum), (restaurant, price range, cheap), (restaurant, area, centre)} Prediction: {(attraction, area, west), (attraction, type, museum), (attraction, name, Cambridge and County Folk Museum), (restaurant, price range, cheap)}
8. Implication not understood
Description: An implication expressed by the user is not understood by the model.
Examples:
User: I am trying to find a train leaving after 14:45 that's heading out from London Liverpool street. What do you have? Agent: There are 45 trains that fit your criteria. Please clarify your destination, day of travel and the time you want to arrive by so that i can narrow it down. User: I need a train to Cambridge on Tuesday. Agent: I have 5 departures fitting your criteria on the :39 of the hour from 15:39 to 23:39. Would you like me to book any of these for you ? User: Yes please do book the 15:39. Annotation: {(train, leaveat, 14:45), (train, departure, London Liverpool street), (train, destination, Cambridge), (train, day, Tuesday), (train, book people, 1)} Prediction: {(train, leaveat, 14:45), (train, departure, London Liverpool street), (train, destination, Cambridge), (train, day, Tuesday)}
9. Incorrect value reference
Description: There are multiple values of a slot in the context and the model refers to an incorrect one. This usually happens in time-related slots such as train departure time.
Examples:
User: I need to travel on Saturday from Cambridge to London Kings Cross and need to leave after 18:30. Agent: Train tr0427 leaves at 19:00 on Saturday and will get you there by 19:51. the cost is 18.88 pounds. Want me to book it? User: Yes, please book the train for 1 person and provide the reference number. Annotation: {(train, departure, Cambridge), (train, destination, London King Cross), (train, day, Saturday), (train, book people, 1), (train, leaveat, 18:30)} Prediction: {(train, departure, Cambridge), (train, destination, London King Cross), (train, day, Saturday), (train, book people, 1), (train, leaveat, 19:00)}
10. Incorrect domain reference
Description: The predicted slot and value should belong to another domain. This happens because many slots exist in multiple domains.
Example:
User: I am looking for information on Cambridge University Botanic Gardens. Agent: They are on Bateman st., postal code cb21jf. They can be reached at 01223336265, the entrance fee is 4 pounds. Can I help with anything else? User: Yes, can you help me find a restaurant? Agent: The botanic gardens are in the centre . Would you like the restaurant to also be in the centre? do you have any type of cuisine in mind? User: never mind, i will worry about food later. I am actually looking for a hotel with a guesthouse and free parking would be great as well. Agent: There are 21 guesthouses with free parking, do you have a price or area preference? User: cheap and in the south please . Annotation: {(hotel, area, south), (hotel, parking, yes), (hotel, price range, cheap), (hotel, type, guesthouse)} Prediction: {(hotel, area, south), (hotel, parking, yes), (hotel, price range, cheap), (hotel, type, guesthouse), (restaurant, price range, cheap)} | SUMBT BIBREF17 is the current state-of-the-art model on WOZ 2.0, TRADE BIBREF3 is the current published state-of-the-art model |
496e81769a8d9992dae187ed60639ff2eec531f3 | 496e81769a8d9992dae187ed60639ff2eec531f3_0 | Q: Which language(s) are found in the WSD datasets?
Text: Introduction
Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1.
A WSD system assigns a sense to a word by taking into account its context, comprising the other words in the sentence. This can be done through discrete word features, which typically involve surrounding words and collocations trained using a classifier BIBREF2, BIBREF3, BIBREF4, BIBREF5. The classifier can also make use of continuous word representations of the surrounding words BIBREF6, BIBREF7. Neural WSD systems BIBREF8, BIBREF9 feed the continuous word representations into a neural network that captures the whole sentence and the word representation in the sentence. However, in both approaches, the word representations are independent of the context.
Recently, pre-trained contextualized word representations BIBREF10, BIBREF11, BIBREF12, BIBREF13 have been shown to improve downstream NLP tasks. Pre-trained contextualized word representations are obtained through neural sentence encoders trained on a huge amount of raw texts. When the resulting sentence encoder is fine-tuned on the downstream task, such as question answering, named entity recognition, and sentiment analysis, with much smaller annotated training data, it has been shown that the trained model, with the pre-trained sentence encoder component, achieves new state-of-the-art results on those tasks.
While demonstrating superior performance in downstream NLP tasks, pre-trained contextualized word representations are still reported to give lower accuracy compared to approaches that use non-contextualized word representations BIBREF10, BIBREF12 when evaluated on WSD. This seems counter-intuitive, as a neural sentence encoder better captures the surrounding context that serves as an important cue to disambiguate words. In this paper, we explore different strategies of integrating pre-trained contextualized word representations for WSD. Our best strategy outperforms prior methods of incorporating pre-trained contextualized word representations and achieves new state-of-the-art accuracy on multiple benchmark WSD datasets.
The following sections are organized as follows. Section SECREF2 presents related work. Section SECREF3 describes our pre-trained contextualized word representation. Section SECREF4 proposes different strategies to incorporate the contextualized word representation for WSD. Section SECREF5 describes our experimental setup. Section SECREF6 presents the experimental results. Section SECREF7 discusses the findings from the experiments. Finally, Section SECREF8 presents the conclusion.
Related Work
Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased.
The neural sequence tagging approach has also been explored for WSD. BIBREF8 proposed bidirectional long short-term memory (LSTM) BIBREF15 for WSD. They concatenated the hidden states of the forward and backward LSTMs and fed the concatenation into an affine transformation followed by softmax normalization, similar to the approach to incorporate a bidirectional LSTM adopted in sequence labeling tasks such as part-of-speech tagging and named entity recognition BIBREF16. BIBREF9 proposed a self-attention layer on top of the concatenated bidirectional LSTM hidden states for WSD and introduced multi-task learning with part-of-speech tagging and semantic labeling as auxiliary tasks. However, on average across the test sets, their approach did not outperform SVM with word embedding features. Subsequently, BIBREF17 proposed the incorporation of glosses from WordNet in a bidirectional LSTM for WSD, and reported better results than both SVM and prior bidirectional LSTM models.
A neural language model (LM) is aimed at predicting a word given its surrounding context. As such, the resulting hidden representation vector captures the context of a word in a sentence. BIBREF10 designed context2vec, which is a one-layer bidirectional LSTM trained to maximize the similarity between the hidden state representation of the LSTM and the target word embedding. BIBREF12 designed ELMo, which is a two-layer bidirectional LSTM language model trained to predict the next word in the forward LSTM and the previous word in the backward LSTM. In both models, WSD was evaluated by nearest neighbor matching between the test and training instance representations. However, despite training on a huge amount of raw texts, the resulting accuracies were still lower than those achieved by WSD approaches with pre-trained non-contextualized word representations.
End-to-end neural machine translation (NMT) BIBREF18, BIBREF19 learns to generate an output sequence given an input sequence, using an encoder-decoder model. The encoder captures the contextualized representation of the words in the input sentence for the decoder to generate the output sentence. Following this intuition, BIBREF11 trained an encoder-decoder model on parallel texts and obtained pre-trained contextualized word representations from the encoder.
Pre-Trained Contextualized Word Representation
The contextualized word representation that we use is BERT BIBREF13, which is a bidirectional transformer encoder model BIBREF20 pre-trained on billions of words of texts. There are two tasks on which the model is trained, i.e., masked word and next sentence prediction. In both tasks, prediction accuracy is determined by the ability of the model to understand the context.
A transformer encoder computes the representation of each word through an attention mechanism with respect to the surrounding words. Given a sentence $x^n_1$ of length $n$, the transformer computes the representation of each word $x_i$ through a multi-head attention mechanism, where the query vector is from $x_i$ and the key-value vector pairs are from the surrounding words $x_{i^{\prime }}$ ($1 \le i^{\prime } \le n$). The word representation produced by the transformer captures the contextual information of a word.
The attention mechanism can be viewed as mapping a query vector $\mathbf {q}$ and a set of key-value vector pairs $(\mathbf {k}, \mathbf {v})$ to an output vector. The attention function $A(\cdot )$ computes the output vector which is the weighted sum of the value vectors and is defined as:
where $\mathbf {K}$ and $\mathbf {V}$ are matrices, containing the key vectors and the value vectors of the words in the sentence respectively, and $\alpha (\mathbf {q}, \mathbf {k}, \rho )$ is a scalar attention weight between $\mathbf {q}$ and $\mathbf {k}$, re-scaled by a scalar $\rho $.
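Written out under the standard scaled dot-product formulation (an assumption about the exact form, since only the surrounding description is given here), the attention function is: $$A(\mathbf {q}, \mathbf {K}, \mathbf {V}) = \sum _{j=1}^{n} \alpha (\mathbf {q}, \mathbf {k}_j, \rho )\, \mathbf {v}_j, \qquad \alpha (\mathbf {q}, \mathbf {k}_j, \rho ) = \frac{\exp (\mathbf {q}^\top \mathbf {k}_j / \rho )}{\sum _{j^{\prime }=1}^{n} \exp (\mathbf {q}^\top \mathbf {k}_{j^{\prime }} / \rho )},$$ where $\mathbf {k}_j$ and $\mathbf {v}_j$ are the $j$-th columns of $\mathbf {K}$ and $\mathbf {V}$, and $\rho $ is typically set to $\sqrt{d_\mathbf {k}}$.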
Two building blocks for the transformer encoder are the multi-head attention mechanism and the position-wise feed-forward neural network (FFNN). The multi-head attention mechanism with $H$ heads leverages the attention function in Equation DISPLAY_FORM1 as follows:
where $\oplus $ denotes concatenation of vectors, $\mathbf {W}_\text{MH} \in \mathbb {R}^{d_\text{model} \times Hd_\mathbf {v}}$, $\mathbf {W}^\mathbf {Q}_\eta , \mathbf {W}^\mathbf {K}_\eta \in \mathbb {R}^{d_\mathbf {k} \times d_\text{model}}$, and $ \mathbf {W}^\mathbf {V}_\eta \in \mathbb {R}^{d_\mathbf {v} \times d_\text{model}}$. The input vector $\mathbf {q} \in \mathbb {R}^{d_\text{model}}$ is the hidden vector for the ambiguous word, while input matrices $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{d_\text{model} \times n}$ are the concatenation of the hidden vectors of all words in the sentence. For each attention head, the dimension of both the query and key vectors is $d_\mathbf {k}$ while the dimension of the value vector is $d_\mathbf {v}$. The encoder model dimension is $d_\text{model}$.
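Given the stated parameter shapes, one reconstruction of the multi-head computation consistent with the description (again, the exact notation is an assumption) is: $$\text{MultiHead}(\mathbf {q}, \mathbf {K}, \mathbf {V}) = \mathbf {W}_\text{MH} \left[ \text{head}_1 \oplus \dots \oplus \text{head}_H \right], \qquad \text{head}_\eta = A\left(\mathbf {W}^\mathbf {Q}_\eta \mathbf {q}, \mathbf {W}^\mathbf {K}_\eta \mathbf {K}, \mathbf {W}^\mathbf {V}_\eta \mathbf {V}\right),$$ so that each head attends in its own lower-dimensional subspace and the concatenated head outputs are projected back to $d_\text{model}$ by $\mathbf {W}_\text{MH}$.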
The position-wise FFNN performs a non-linear transformation on the attention output corresponding to each input word position as follows:
in which the input vector $\mathbf {u} \in \mathbb {R}^{d_\text{model}}$ is transformed to the output vector with dimension $d_\text{model}$ via a series of linear projections with the ReLU activation function.
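A standard two-layer realization of this position-wise transformation (the inner dimension is an implementation detail not specified here) is: $$\text{FFNN}(\mathbf {u}) = \mathbf {W}_2\, \max (0, \mathbf {W}_1 \mathbf {u} + \mathbf {b}_1) + \mathbf {b}_2 .$$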
For the hidden layer $l$ ($1 \le l \le L$), the self-attention sub-layer output $\mathbf {f}^l_i$ is computed as follows:
where LayerNorm refers to layer normalization BIBREF21 and the superscript $l$ and subscript $\mathbf {h}$ indicate that each encoder layer $l$ has an independent set of multi-head attention weight parameters (see Equations DISPLAY_FORM2 and ). The input for the first layer is $\mathbf {h}^0_i = \mathbf {E}(x_i)$, which is the non-contextualized word embedding of $x_i$.
The second sub-layer consists of the position-wise fully connected FFNN, computed as:
where, similar to self-attention, an independent set of weight parameters in Equation DISPLAY_FORM3 is defined in each layer.
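Putting the two sub-layers together, and assuming the usual transformer residual connection around each sub-layer, layer $l$ can be summarized as: $$\mathbf {f}^l_i = \text{LayerNorm}\left(\mathbf {h}^{l-1}_i + \text{MultiHead}^l_\mathbf {h}\left(\mathbf {h}^{l-1}_i, \mathbf {H}^{l-1}, \mathbf {H}^{l-1}\right)\right), \qquad \mathbf {h}^l_i = \text{LayerNorm}\left(\mathbf {f}^l_i + \text{FFNN}^l\left(\mathbf {f}^l_i\right)\right),$$ where $\mathbf {H}^{l-1}$ stacks the hidden vectors of all words in the sentence at layer $l-1$.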
Incorporating Pre-Trained Contextualized Word Representation
As BERT is trained on the masked word prediction task, which is to predict a word given the surrounding (left and right) context, the pre-trained model already captures the context. In this section, we describe different techniques of leveraging BERT for WSD, broadly categorized into nearest neighbor matching and linear projection of hidden layers.
Incorporating Pre-Trained Contextualized Word Representation ::: Nearest Neighbor Matching
A straightforward way to disambiguate word sense is through 1-nearest neighbor matching. We compute the contextualized representation of each word in the training data and the test data through BERT. Given a hidden representation $\mathbf {h}^L_{i}$ at the $L$-th layer for word $x_i$ in the test data, nearest neighbor matching finds a vector $\mathbf {h^*}$ in the $L$-th layer from the training data such that
where the sense assigned to $x_i$ is the sense of the word whose contextualized representation is $\mathbf {h^*}$. This WSD technique is adopted in earlier work on contextualized word representations BIBREF10, BIBREF12.
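A minimal sketch of this matching step, assuming cosine similarity as the distance measure (the text does not pin down the exact metric) and pre-computed last-layer BERT vectors:

import numpy as np

# train_vecs: (num_train, d_model) array of L-th layer BERT vectors for
# sense-annotated training occurrences of the target word;
# train_senses: the corresponding sense labels.
def nn_wsd(test_vec, train_vecs, train_senses):
    # cosine similarity between the test occurrence and every training occurrence
    sims = train_vecs @ test_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(test_vec) + 1e-12)
    return train_senses[int(np.argmax(sims))]  # sense of the nearest neighbor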
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers
Apart from nearest neighbor matching, we can perform a linear projection of the hidden vector $\mathbf {h}_i$ by an affine transformation into an output sense vector, with its dimension equal to the number of senses for word $x_i$. The output of this affine transformation is normalized by softmax such that all its values sum to 1. Therefore, the predicted sense $\mathbf {s}_i$ of word $x_i$ is formulated as
where $\mathbf {s}_i$ is a vector of predicted sense distribution for word $x_i$, while $\mathbf {W}^{\text{lexelt}(x_i)}$ and $\mathbf {b}^{\text{lexelt}(x_i)}$ are respectively the projection matrix and bias vector specific to the lexical element (lexelt) of word $x_i$, which consists of its lemma and optionally its part-of-speech tag. We choose the sense corresponding to the element of $\mathbf {s}_i$ with the maximum value.
Training the linear projection model is done by the back-propagation algorithm, which updates the model parameters to minimize a cost function. Our cost function is the negative log-likelihood of the softmax output value that corresponds to the tagged sense in the training data. In addition, we propose two novel ways of incorporating BERT's hidden representation vectors to compute the sense output vector, which are described in the following sub-subsections.
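A short sketch of the per-lexelt projection and its negative log-likelihood training step, written in PyTorch with hypothetical lexelts and sense counts; the BERT encoder that produces the hidden vector is omitted and stubbed with a random tensor.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LexeltProjection(nn.Module):
    # One affine layer per lexical element, mapping a BERT hidden vector
    # to logits over that lexelt's senses.
    def __init__(self, d_model, senses_per_lexelt):
        super().__init__()
        self.proj = nn.ModuleDict({
            lexelt: nn.Linear(d_model, n_senses)
            for lexelt, n_senses in senses_per_lexelt.items()})

    def forward(self, h, lexelt):
        return self.proj[lexelt](h)  # softmax / argmax is applied at prediction time

# Hypothetical lexelts ("lemma.pos") and sense inventory sizes.
model = LexeltProjection(768, {"bank.n": 10, "run.v": 41})
h = torch.randn(1, 768)   # stand-in for the BERT hidden vector of the target word
gold = torch.tensor([3])  # index of the sense tagged in the training data
loss = F.cross_entropy(model(h, "bank.n"), gold)  # negative log-likelihood of the gold sense
loss.backward()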
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Last Layer Projection
Similar to the nearest neighbor matching model, we can feed the hidden vector of BERT in the last layer, $\mathbf {h}^L_i$, into an affine transformation followed by softmax. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is instantiated by $\mathbf {h}^L_i$. The last layer projection model is illustrated in Figure FIGREF7(a).
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Weighted Sum of Hidden Layers
BERT consists of multiple layers stacked one after another. Each layer carries a different representation of word context. Taking into account different hidden layers may help the WSD system to learn from different context information encoded in different layers of BERT.
To take into account all layers, we compute the weighted sum of all hidden layers, $\mathbf {h}^l_i$, where $1 \le l \le L$, corresponding to a word position $i$, through attention mechanism. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is replaced by the following equation:
where $\mathbf {m} \in \mathbb {R}^{d_\text{model}}$ is a projection vector to obtain scalar values with the key vectors. The model with weighted sum of all hidden layers is illustrated in Figure FIGREF7(b).
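One reconstruction consistent with this description (treating $\mathbf {m}$ as a shared query vector scored against each layer's hidden vector, which is an assumption) is: $$\mathbf {h}_i = \sum _{l=1}^{L} s^l_i\, \mathbf {h}^l_i, \qquad s^l_i = \frac{\exp (\mathbf {m}^\top \mathbf {h}^l_i)}{\sum _{l^{\prime }=1}^{L} \exp (\mathbf {m}^\top \mathbf {h}^{l^{\prime }}_i)} .$$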
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Gated Linear Unit
While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. As such, we propose filtering the vector values by a gating vector whose values range from 0 to 1. This mechanism is known as the gated linear unit (GLU) BIBREF22, which is formulated as
where $\mathbf {W}^\mathbf {h}$ and $\mathbf {W}^\mathbf {g}$ are separate projection matrices and $\mathbf {b}^\mathbf {h}$ and $\mathbf {b}^\mathbf {g}$ are separate bias vectors. The symbols $\sigma (\cdot )$ and $\odot $ denote the sigmoid function and element-wise vector multiplication operation respectively.
GLU transforms the input vector $\mathbf {h}$ by feeding it to two separate affine transformations. The second transformation is used as the sigmoid gate to filter the input vector, which is summed with the vector after the first affine transformation.
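The prose admits more than one reading; one formulation consistent with it (a hedged reconstruction, and a variant of the original GLU of BIBREF22, which instead gates the output of the first affine transformation) is: $$\text{GLU}(\mathbf {h}) = \left(\mathbf {W}^\mathbf {h} \mathbf {h} + \mathbf {b}^\mathbf {h}\right) + \sigma \left(\mathbf {W}^\mathbf {g} \mathbf {h} + \mathbf {b}^\mathbf {g}\right) \odot \mathbf {h} .$$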
Experimental Setup
We conduct experiments on various WSD tasks. The description and the statistics for each task are given in Table . For English, a lexical element (lexelt) is defined as a combination of lemma and part-of-speech tag, while for Chinese, it is simply the lemma, following the OntoNotes setup.
We exploit English BERT$_\text{BASE}$ for the English tasks and Chinese BERT for the Chinese task. We conduct experiments with different strategies of incorporating BERT as described in Section SECREF4, namely 1-nearest neighbor matching (1-nn) and linear projection. In the latter technique, we explore strategies including simple last layer projection, layer weighting (LW), and gated linear unit (GLU).
In the linear projection model, we train the model by the Adam algorithm BIBREF23 with a learning rate of $10^{-3}$. The model parameters are updated per mini-batch of 16 sentences. As update progresses, we pick the best model parameters from a series of neural network updates based on accuracy on a held-out development set, disjoint from the training set.
The state-of-the-art supervised WSD approach takes into account features from the neighboring sentences, typically one sentence to the left and one to the right apart from the current sentence that contains the ambiguous words. We also exploit this in our model, as BERT supports inputs with multiple sentences separated by a special [SEP] symbol.
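A sketch of how such a multi-sentence input could be assembled with a BERT tokenizer; the surrounding-sentence window and the segment handling shown here are assumptions for illustration, not necessarily the exact preprocessing used in the experiments.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

left  = "He walked along the river."       # previous sentence
cur   = "He sat down on the bank."         # sentence containing the ambiguous word
right = "The grass there was still wet."   # next sentence

# One input sequence: the current sentence plus its neighbours, separated by [SEP].
tokens = (["[CLS]"] + tokenizer.tokenize(left) + ["[SEP]"]
          + tokenizer.tokenize(cur) + ["[SEP]"]
          + tokenizer.tokenize(right) + ["[SEP]"])
input_ids = tokenizer.convert_tokens_to_ids(tokens)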
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training.
We also evaluate on Senseval-2 and Senseval-3 English lexical sample tasks, which come with pre-defined training and test data. For each word type, we pick 20% of the training instances to be our development set and keep the remaining 80% as the actual training data. Through this development set, we determine the number of epochs to use in training. We then re-train the model with the whole training dataset using the number of epochs identified in the initial training step.
While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre.
For Chinese WSD evaluation, we train IMS BIBREF5 on the Chinese OntoNotes dataset as our baseline. We also incorporate pre-trained non-contextualized Chinese word embeddings as IMS features BIBREF6, BIBREF7. The pre-trained word embeddings are obtained by training the word2vec skip-gram model on Chinese Gigaword Fifth Edition, which after automatic word segmentation contains approximately 2 billion words. Following BIBREF6, we incorporate the embedding features of words within a window surrounding the target ambiguous word. In our experiments, we take into account 5 words to the left and right.
Results
We present our experimental results and compare them with prior baselines.
Results ::: English All-Words Tasks
For English all-words WSD, we compare our approach with three categories of prior approaches. Firstly, we compare our approach with the supervised SVM classifier approach, namely IMS BIBREF5. We compare our approach with both the original IMS without word embedding features and IMS with non-contextualized word embedding features, that is, word2vec with exponential decay BIBREF7. We also compare with SupWSD BIBREF27, which is an alternative implementation of IMS. Secondly, we compare our approach with the neural WSD approaches that leverage bidirectional LSTM (bi-LSTM). These include the bi-LSTM model with attention trained jointly with lexical semantic labeling task BIBREF9 (BiLSTMatt+LEX) and the bi-LSTM model enhanced with gloss representation from WordNet (GAS). Thirdly, we show comparison with prior contextualized word representations for WSD, pre-trained on a large number of texts, namely context2vec BIBREF10 and ELMo BIBREF12. In these two models, WSD is treated as nearest neighbor matching as described in Section SECREF4.
Table shows our WSD results in F1 measure. It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT's pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently.
We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§SECREF12) and gated linear unit (§SECREF14), gives the best F1 scores on all test sets.
All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems.
Results ::: English Lexical Sample Tasks
For English lexical sample tasks, we compare our approach with the original IMS BIBREF5 and IMS with non-contextualized word embedding features. The embedding features incorporated into IMS include CW embeddings BIBREF28, obtained from a convolutional language model, fine-tuned (adapted) to WSD BIBREF6 (+adapted CW), and word2vec skip-gram BIBREF29 with exponential decay BIBREF7 (+w2v+expdecay). We also compare our approach with the bi-LSTM, on top of which sense classification is defined BIBREF8, and context2vec BIBREF10, which is a contextualized pre-trained bi-LSTM model trained on 2B words of text. Finally, we also compare with a prior multi-task and semi-supervised WSD approach learned through alternating structure optimization (ASO) BIBREF3, which also utilizes unlabeled data for training.
As shown in Table , our BERT-based WSD approach with linear projection model outperforms all prior approaches. context2vec, which is pre-trained on a large amount of texts, performs worse than the prior semi-supervised ASO approach on Senseval-3, while our best result outperforms ASO by a large margin.
Neural bi-LSTM performs worse than IMS with non-contextualized word embedding features. Our neural model with pre-trained contextualized word representations outperforms the best result achieved by IMS on both Senseval-2 and Senseval-3.
Results ::: Chinese OntoNotes WSD
We compare our approach with IMS without and with word embedding features as the baselines. The results are shown in Table .
Across all genres, BERT outperforms the baseline IMS with word embedding (non-contextualized word representation) features BIBREF6. The latter also improves over the original IMS without word embedding features BIBREF5. Linear projection approaches consistently outperform nearest neighbor matching by a significant margin, similar to the results on English WSD tasks.
The best overall result for the Chinese OntoNotes test set is achieved by the models with simple projection and hidden layer weighting.
Discussion
Across all tasks (English all-words, English lexical sample, and Chinese OntoNotes), our experiments demonstrate the effectiveness of BERT over various prior WSD approaches. The best results are consistently obtained by linear projection models, which project the last hidden layer or the weighted sum of all hidden layers to an output sense vector.
We can view the BERT hidden layer outputs as contextual features, which serve as useful cues in determining the word senses. In fact, the attention mechanism in the transformer captures the surrounding words. In prior work like IMS BIBREF5, these contextual cues are captured by the manually-defined surrounding word and collocation features. The features obtained from the hidden vector output are shown to be more effective than the manually-defined features.
We introduced two advanced linear projection techniques, namely layer weighting and gated linear unit (GLU). While BIBREF12 showed that the second biLSTM layer results in better WSD accuracy compared to the first layer (nearer to the individual word representation), we showed that taking into account different layers by means of the attention mechanism is useful for WSD. GLU as an activation function has been shown to be effective for better convergence and to overcome the vanishing gradient problem in the convolutional language model BIBREF22. In addition, the GLU gate vector, with values ranging from 0 to 1, can be seen as a filter for the features from the hidden layer vector.
Compared with two prior contextualized word representations models, context2vec BIBREF10 and ELMo BIBREF12, BERT achieves higher accuracy. This shows the effectiveness of the attention mechanism used in the transformer model to represent the context.
Our BERT WSD models outperform prior neural WSD models by a large margin. These prior neural WSD models perform comparably to IMS when the latter uses embeddings as classifier features in addition to the discrete features. While neural WSD approaches BIBREF8, BIBREF9, BIBREF17 exploit non-contextualized word embeddings which are trained on large amounts of text, their hidden layers are trained only on a small amount of labeled data.
Conclusion
For the WSD task, we have proposed novel strategies of incorporating BERT, a pre-trained contextualized word representation which effectively captures the context in its hidden vectors. Our experiments show that linear projection of the hidden vectors, coupled with gating to filter the values, gives better results than the prior state of the art. Compared to prior neural and feature-based WSD approaches that make use of non-contextualized word representations, using pre-trained contextualized word representation with our proposed incorporation strategy achieves significantly higher scores. | WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese |
f103789b85b00ec973076652c639bd31c605381e | f103789b85b00ec973076652c639bd31c605381e_0 | Q: What datasets are used for testing?
Text: Introduction
Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1.
A WSD system assigns a sense to a word by taking into account its context, comprising the other words in the sentence. This can be done through discrete word features, which typically involve surrounding words and collocations trained using a classifier BIBREF2, BIBREF3, BIBREF4, BIBREF5. The classifier can also make use of continuous word representations of the surrounding words BIBREF6, BIBREF7. Neural WSD systems BIBREF8, BIBREF9 feed the continuous word representations into a neural network that captures the whole sentence and the word representation in the sentence. However, in both approaches, the word representations are independent of the context.
Recently, pre-trained contextualized word representations BIBREF10, BIBREF11, BIBREF12, BIBREF13 have been shown to improve downstream NLP tasks. Pre-trained contextualized word representations are obtained through neural sentence encoders trained on a huge amount of raw texts. When the resulting sentence encoder is fine-tuned on the downstream task, such as question answering, named entity recognition, and sentiment analysis, with much smaller annotated training data, it has been shown that the trained model, with the pre-trained sentence encoder component, achieves new state-of-the-art results on those tasks.
While demonstrating superior performance in downstream NLP tasks, pre-trained contextualized word representations are still reported to give lower accuracy compared to approaches that use non-contextualized word representations BIBREF10, BIBREF12 when evaluated on WSD. This seems counter-intuitive, as a neural sentence encoder better captures the surrounding context that serves as an important cue to disambiguate words. In this paper, we explore different strategies of integrating pre-trained contextualized word representations for WSD. Our best strategy outperforms prior methods of incorporating pre-trained contextualized word representations and achieves new state-of-the-art accuracy on multiple benchmark WSD datasets.
The following sections are organized as follows. Section SECREF2 presents related work. Section SECREF3 describes our pre-trained contextualized word representation. Section SECREF4 proposes different strategies to incorporate the contextualized word representation for WSD. Section SECREF5 describes our experimental setup. Section SECREF6 presents the experimental results. Section SECREF7 discusses the findings from the experiments. Finally, Section SECREF8 presents the conclusion.
Related Work
Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased.
The neural sequence tagging approach has also been explored for WSD. BIBREF8 proposed bidirectional long short-term memory (LSTM) BIBREF15 for WSD. They concatenated the hidden states of the forward and backward LSTMs and fed the concatenation into an affine transformation followed by softmax normalization, similar to the approach to incorporate a bidirectional LSTM adopted in sequence labeling tasks such as part-of-speech tagging and named entity recognition BIBREF16. BIBREF9 proposed a self-attention layer on top of the concatenated bidirectional LSTM hidden states for WSD and introduced multi-task learning with part-of-speech tagging and semantic labeling as auxiliary tasks. However, on average across the test sets, their approach did not outperform SVM with word embedding features. Subsequently, BIBREF17 proposed the incorporation of glosses from WordNet in a bidirectional LSTM for WSD, and reported better results than both SVM and prior bidirectional LSTM models.
A neural language model (LM) is aimed at predicting a word given its surrounding context. As such, the resulting hidden representation vector captures the context of a word in a sentence. BIBREF10 designed context2vec, which is a one-layer bidirectional LSTM trained to maximize the similarity between the hidden state representation of the LSTM and the target word embedding. BIBREF12 designed ELMo, which is a two-layer bidirectional LSTM language model trained to predict the next word in the forward LSTM and the previous word in the backward LSTM. In both models, WSD was evaluated by nearest neighbor matching between the test and training instance representations. However, despite training on a huge amount of raw texts, the resulting accuracies were still lower than those achieved by WSD approaches with pre-trained non-contextualized word representations.
End-to-end neural machine translation (NMT) BIBREF18, BIBREF19 learns to generate an output sequence given an input sequence, using an encoder-decoder model. The encoder captures the contextualized representation of the words in the input sentence for the decoder to generate the output sentence. Following this intuition, BIBREF11 trained an encoder-decoder model on parallel texts and obtained pre-trained contextualized word representations from the encoder.
Pre-Trained Contextualized Word Representation
The contextualized word representation that we use is BERT BIBREF13, which is a bidirectional transformer encoder model BIBREF20 pre-trained on billions of words of texts. There are two tasks on which the model is trained, i.e., masked word and next sentence prediction. In both tasks, prediction accuracy is determined by the ability of the model to understand the context.
A transformer encoder computes the representation of each word through an attention mechanism with respect to the surrounding words. Given a sentence $x^n_1$ of length $n$, the transformer computes the representation of each word $x_i$ through a multi-head attention mechanism, where the query vector is from $x_i$ and the key-value vector pairs are from the surrounding words $x_{i^{\prime }}$ ($1 \le i^{\prime } \le n$). The word representation produced by the transformer captures the contextual information of a word.
The attention mechanism can be viewed as mapping a query vector $\mathbf {q}$ and a set of key-value vector pairs $(\mathbf {k}, \mathbf {v})$ to an output vector. The attention function $A(\cdot )$ computes the output vector which is the weighted sum of the value vectors and is defined as:
where $\mathbf {K}$ and $\mathbf {V}$ are matrices, containing the key vectors and the value vectors of the words in the sentence respectively, and $\alpha (\mathbf {q}, \mathbf {k}, \rho )$ is a scalar attention weight between $\mathbf {q}$ and $\mathbf {k}$, re-scaled by a scalar $\rho $.
Two building blocks for the transformer encoder are the multi-head attention mechanism and the position-wise feed-forward neural network (FFNN). The multi-head attention mechanism with $H$ heads leverages the attention function in Equation DISPLAY_FORM1 as follows:
where $\oplus $ denotes concatenation of vectors, $\mathbf {W}_\text{MH} \in \mathbb {R}^{d_\text{model} \times Hd_\mathbf {v}}$, $\mathbf {W}^\mathbf {Q}_\eta , \mathbf {W}^\mathbf {K}_\eta \in \mathbb {R}^{d_\mathbf {k} \times d_\text{model}}$, and $ \mathbf {W}^\mathbf {V}_\eta \in \mathbb {R}^{d_\mathbf {v} \times d_\text{model}}$. The input vector $\mathbf {q} \in \mathbb {R}^{d_\text{model}}$ is the hidden vector for the ambiguous word, while input matrices $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{d_\text{model} \times n}$ are the concatenation of the hidden vectors of all words in the sentence. For each attention head, the dimension of both the query and key vectors is $d_\mathbf {k}$ while the dimension of the value vector is $d_\mathbf {v}$. The encoder model dimension is $d_\text{model}$.
The position-wise FFNN performs a non-linear transformation on the attention output corresponding to each input word position as follows:
in which the input vector $\mathbf {u} \in \mathbb {R}^{d_\text{model}}$ is transformed to the output vector with dimension $d_\text{model}$ via a series of linear projections with the ReLU activation function.
For the hidden layer $l$ ($1 \le l \le L$), the self-attention sub-layer output $\mathbf {f}^l_i$ is computed as follows:
where LayerNorm refers to layer normalization BIBREF21 and the superscript $l$ and subscript $\mathbf {h}$ indicate that each encoder layer $l$ has an independent set of multi-head attention weight parameters (see Equations DISPLAY_FORM2 and ). The input for the first layer is $\mathbf {h}^0_i = \mathbf {E}(x_i)$, which is the non-contextualized word embedding of $x_i$.
The second sub-layer consists of the position-wise fully connected FFNN, computed as:
where, similar to self-attention, an independent set of weight parameters in Equation DISPLAY_FORM3 is defined in each layer.
Incorporating Pre-Trained Contextualized Word Representation
As BERT is trained on the masked word prediction task, which is to predict a word given the surrounding (left and right) context, the pre-trained model already captures the context. In this section, we describe different techniques of leveraging BERT for WSD, broadly categorized into nearest neighbor matching and linear projection of hidden layers.
Incorporating Pre-Trained Contextualized Word Representation ::: Nearest Neighbor Matching
A straightforward way to disambiguate word sense is through 1-nearest neighbor matching. We compute the contextualized representation of each word in the training data and the test data through BERT. Given a hidden representation $\mathbf {h}^L_{i}$ at the $L$-th layer for word $x_i$ in the test data, nearest neighbor matching finds a vector $\mathbf {h^*}$ in the $L$-th layer from the training data such that
where the sense assigned to $x_i$ is the sense of the word whose contextualized representation is $\mathbf {h^*}$. This WSD technique is adopted in earlier work on contextualized word representations BIBREF10, BIBREF12.
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers
Apart from nearest neighbor matching, we can perform a linear projection of the hidden vector $\mathbf {h}_i$ by an affine transformation into an output sense vector, with its dimension equal to the number of senses for word $x_i$. The output of this affine transformation is normalized by softmax such that all its values sum to 1. Therefore, the predicted sense $\mathbf {s}_i$ of word $x_i$ is formulated as
where $\mathbf {s}_i$ is a vector of predicted sense distribution for word $x_i$, while $\mathbf {W}^{\text{lexelt}(x_i)}$ and $\mathbf {b}^{\text{lexelt}(x_i)}$ are respectively the projection matrix and bias vector specific to the lexical element (lexelt) of word $x_i$, which consists of its lemma and optionally its part-of-speech tag. We choose the sense corresponding to the element of $\mathbf {s}_i$ with the maximum value.
Training the linear projection model is done by the back-propagation algorithm, which updates the model parameters to minimize a cost function. Our cost function is the negative log-likelihood of the softmax output value that corresponds to the tagged sense in the training data. In addition, we propose two novel ways of incorporating BERT's hidden representation vectors to compute the sense output vector, which are described in the following sub-subsections.
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Last Layer Projection
Similar to the nearest neighbor matching model, we can feed the hidden vector of BERT in the last layer, $\mathbf {h}^L_i$, into an affine transformation followed by softmax. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is instantiated by $\mathbf {h}^L_i$. The last layer projection model is illustrated in Figure FIGREF7(a).
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Weighted Sum of Hidden Layers
BERT consists of multiple layers stacked one after another. Each layer carries a different representation of word context. Taking into account different hidden layers may help the WSD system to learn from different context information encoded in different layers of BERT.
To take into account all layers, we compute the weighted sum of all hidden layers, $\mathbf {h}^l_i$, where $1 \le l \le L$, corresponding to a word position $i$, through attention mechanism. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is replaced by the following equation:
where $\mathbf {m} \in \mathbb {R}^{d_\text{model}}$ is a projection vector to obtain scalar values with the key vectors. The model with weighted sum of all hidden layers is illustrated in Figure FIGREF7(b).
Incorporating Pre-Trained Contextualized Word Representation ::: Linear Projection of Hidden Layers ::: Gated Linear Unit
While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. As such, we propose filtering the vector values by a gating vector whose values range from 0 to 1. This mechanism is known as the gated linear unit (GLU) BIBREF22, which is formulated as
where $\mathbf {W}^\mathbf {h}$ and $\mathbf {W}^\mathbf {g}$ are separate projection matrices and $\mathbf {b}^\mathbf {h}$ and $\mathbf {b}^\mathbf {g}$ are separate bias vectors. The symbols $\sigma (\cdot )$ and $\odot $ denote the sigmoid function and element-wise vector multiplication operation respectively.
GLU transforms the input vector $\mathbf {h}$ by feeding it to two separate affine transformations. The second transformation is used as the sigmoid gate to filter the input vector, which is summed with the vector after the first affine transformation.
Experimental Setup
We conduct experiments on various WSD tasks. The description and the statistics for each task are given in Table . For English, a lexical element (lexelt) is defined as a combination of lemma and part-of-speech tag, while for Chinese, it is simply the lemma, following the OntoNotes setup.
We exploit English BERT$_\text{BASE}$ for the English tasks and Chinese BERT for the Chinese task. We conduct experiments with different strategies of incorporating BERT as described in Section SECREF4, namely 1-nearest neighbor matching (1-nn) and linear projection. In the latter technique, we explore strategies including simple last layer projection, layer weighting (LW), and gated linear unit (GLU).
In the linear projection model, we train the model by the Adam algorithm BIBREF23 with a learning rate of $10^{-3}$. The model parameters are updated per mini-batch of 16 sentences. As update progresses, we pick the best model parameters from a series of neural network updates based on accuracy on a held-out development set, disjoint from the training set.
The state-of-the-art supervised WSD approach takes into account features from the neighboring sentences, typically one sentence to the left and one to the right apart from the current sentence that contains the ambiguous words. We also exploit this in our model, as BERT supports inputs with multiple sentences separated by a special [SEP] symbol.
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training.
We also evaluate on Senseval-2 and Senseval-3 English lexical sample tasks, which come with pre-defined training and test data. For each word type, we pick 20% of the training instances to be our development set and keep the remaining 80% as the actual training data. Through this development set, we determine the number of epochs to use in training. We then re-train the model with the whole training dataset using the number of epochs identified in the initial training step.
While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre.
For Chinese WSD evaluation, we train IMS BIBREF5 on the Chinese OntoNotes dataset as our baseline. We also incorporate pre-trained non-contextualized Chinese word embeddings as IMS features BIBREF6, BIBREF7. The pre-trained word embeddings are obtained by training the word2vec skip-gram model on Chinese Gigaword Fifth Edition, which after automatic word segmentation contains approximately 2 billion words. Following BIBREF6, we incorporate the embedding features of words within a window surrounding the target ambiguous word. In our experiments, we take into account 5 words to the left and right.
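A small sketch of how such window embedding features might be assembled for IMS (plain concatenation is assumed here; the cited work uses an exponential-decay weighting, and all names are hypothetical):

```python
import numpy as np

def window_features(tokens, target_idx, emb, window=5, dim=300):
    """Concatenate embeddings of up to `window` words left/right of the target word."""
    feats = []
    for offset in list(range(-window, 0)) + list(range(1, window + 1)):
        i = target_idx + offset
        in_range = 0 <= i < len(tokens)
        feats.append(emb.get(tokens[i], np.zeros(dim)) if in_range else np.zeros(dim))
    return np.concatenate(feats)   # real-valued features appended to IMS's discrete ones

# Example: features for the ambiguous word "bank" in a toy sentence.
emb = {"river": np.ones(300), "bank": np.ones(300)}
print(window_features(["the", "river", "bank", "was", "muddy"], 2, emb).shape)   # (3000,)
```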
Results
We present our experimental results and compare them with prior baselines.
Results ::: English All-Words Tasks
For English all-words WSD, we compare our approach with three categories of prior approaches. Firstly, we compare our approach with the supervised SVM classifier approach, namely IMS BIBREF5. We compare our approach with both the original IMS without word embedding features and IMS with non-contextualized word embedding features, that is, word2vec with exponential decay BIBREF7. We also compare with SupWSD BIBREF27, which is an alternative implementation of IMS. Secondly, we compare our approach with the neural WSD approaches that leverage bidirectional LSTM (bi-LSTM). These include the bi-LSTM model with attention trained jointly with lexical semantic labeling task BIBREF9 (BiLSTMatt+LEX) and the bi-LSTM model enhanced with gloss representation from WordNet (GAS). Thirdly, we show comparison with prior contextualized word representations for WSD, pre-trained on a large number of texts, namely context2vec BIBREF10 and ELMo BIBREF12. In these two models, WSD is treated as nearest neighbor matching as described in Section SECREF4.
Table shows our WSD results in F1 measure. It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT's pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently.
We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§SECREF12) and the gated linear unit (§SECREF14), gives the best F1 scores on all test sets.
All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems.
Results ::: English Lexical Sample Tasks
For English lexical sample tasks, we compare our approach with the original IMS BIBREF5 and IMS with non-contextualized word embedding features. The embedding features incorporated into IMS include CW embeddings BIBREF28, obtained from a convolutional language model, fine-tuned (adapted) to WSD BIBREF6 (+adapted CW), and word2vec skip-gram BIBREF29 with exponential decay BIBREF7 (+w2v+expdecay). We also compare our approach with the bi-LSTM, on top of which sense classification is defined BIBREF8, and context2vec BIBREF10, which is a contextualized pre-trained bi-LSTM model trained on 2B words of text. Finally, we also compare with a prior multi-task and semi-supervised WSD approach learned through alternating structure optimization (ASO) BIBREF3, which also utilizes unlabeled data for training.
As shown in Table , our BERT-based WSD approach with linear projection model outperforms all prior approaches. context2vec, which is pre-trained on a large amount of texts, performs worse than the prior semi-supervised ASO approach on Senseval-3, while our best result outperforms ASO by a large margin.
Neural bi-LSTM performs worse than IMS with non-contextualized word embedding features. Our neural model with pre-trained contextualized word representations outperforms the best result achieved by IMS on both Senseval-2 and Senseval-3.
Results ::: Chinese OntoNotes WSD
We compare our approach with IMS without and with word embedding features as the baselines. The results are shown in Table .
Across all genres, BERT outperforms the baseline IMS with word embedding (non-contextualized word representation) features BIBREF6. The latter also improves over the original IMS without word embedding features BIBREF5. Linear projection approaches consistently outperform nearest neighbor matching by a significant margin, similar to the results on English WSD tasks.
The best overall result for the Chinese OntoNotes test set is achieved by the models with simple projection and hidden layer weighting.
Discussion
Across all tasks (English all-words, English lexical sample, and Chinese OntoNotes), our experiments demonstrate the effectiveness of BERT over various prior WSD approaches. The best results are consistently obtained by linear projection models, which project the last hidden layer or the weighted sum of all hidden layers to an output sense vector.
We can view the BERT hidden layer outputs as contextual features, which serve as useful cues in determining the word senses. In fact, the attention mechanism in the transformer captures information from the surrounding words. In prior work like IMS BIBREF5, these contextual cues are captured by the manually defined surrounding word and collocation features. The features obtained from the hidden vector outputs are shown to be more effective than the manually defined features.
We introduced two advanced linear projection techniques, namely layer weighting and gated linear unit (GLU). While BIBREF12 showed that the second biLSTM layer results in better WSD accuracy compared to the first layer (nearer to the individual word representation), we showed that taking into account different layers by means of the attention mechanism is useful for WSD. GLU as an activation function has been shown to be effective for better convergence and to overcome the vanishing gradient problem in the convolutional language model BIBREF22. In addition, the GLU gate vector, with values ranging from 0 to 1, can be seen as a filter for the features from the hidden layer vector.
Compared with two prior contextualized word representations models, context2vec BIBREF10 and ELMo BIBREF12, BERT achieves higher accuracy. This shows the effectiveness of the attention mechanism used in the transformer model to represent the context.
Our BERT WSD models outperform prior neural WSD models by a large margin. These prior neural WSD models perform comparably with IMS with embeddings as classifier features, in addition to the discrete features. While neural WSD approaches BIBREF8, BIBREF9, BIBREF17 exploit non-contextualized word embeddings which are trained on large texts, the hidden layers are trained only on a small amount of labeled data.
Conclusion
For the WSD task, we have proposed novel strategies of incorporating BERT, a pre-trained contextualized word representation which effectively captures the context in its hidden vectors. Our experiments show that linear projection of the hidden vectors, coupled with gating to filter the values, gives better results than the prior state of the art. Compared to prior neural and feature-based WSD approaches that make use of non-contextualized word representations, using pre-trained contextualized word representation with our proposed incorporation strategy achieves significantly higher scores. | Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15), OntoNotes Release 5.0 |
9c4a4dfa7b0b977173e76e2d2f08fa984af86f0e | 9c4a4dfa7b0b977173e76e2d2f08fa984af86f0e_0 | Q: How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?
Text: INTRODUCTION
When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.
In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9).
Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis.
Background: Review of Tensor-Product Representation
The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols.
The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR `binding' operation, the tensor (or generalized outer) product $\otimes $.
Formally, suppose a symbolic type is defined by the roles $\lbrace r_i \rbrace $, and suppose that in a particular instance of that type, ${S}$, role $r_i$ is bound by filler $f_i$. The TPR embedding of ${S}$ is the order-2 tensor $\mathbf {S} = \sum _i \mathbf {f}_i \otimes \mathbf {r}_i = \sum _i \mathbf {f}_i \mathbf {r}_i^\top $ where $\lbrace \mathbf {f}_i \rbrace $ are vector embeddings of the fillers and $\lbrace \mathbf {r}_i \rbrace $ are vector embeddings of the roles. In Eq. SECREF2, and below, for notational simplicity we conflate order-2 tensors and matrices.
As a simple example, consider the symbolic type string, and choose roles to be $r_1 = $ first_element, $r_2 = $ second_element, etc. Then in the specific string S = cba, the first role $r_1$ is filled by c, and $r_2$ and $r_3$ by b and a, respectively. The TPR for S is $\mathbf {c} \otimes \mathbf {r}_1 + \mathbf {b} \otimes \mathbf {r}_2 + \mathbf {a} \otimes \mathbf {r}_3$, where $\mathbf {a}, \mathbf {b}, \mathbf {c}$ are the vector embeddings of the symbols a, b, c, and $\mathbf {r}_i$ is the vector embedding of role $r_i$.
A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be $n_{\mathrm {R}}, n_{\mathrm {F}}$, respectively. Define the matrix of all possible role vectors to be $\mathbf {R} \in \mathbb {R}^{d_{\mathrm {R}}\times n_{\mathrm {R}}}$, with column $i$, $[\mathbf {R}]_{:i} = \mathbf {r}_i \in \mathbb {R}^{d_{\mathrm {R}}}$, comprising the embedding of $r_i$. Similarly let $\mathbf {F} \in \mathbb {R}^{d_{\mathrm {F}}\times n_{\mathrm {F}}}$ be the matrix of all possible filler vectors. The TPR $\mathbf {S} \in \mathbb {R}^{d_{\mathrm {F}}\times d_{\mathrm {R}}}$. Below, $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ will be hyper-parameters, while $\mathbf {R}, \mathbf {F}$ will be learned parameter matrices.
Using summation in Eq. SECREF2 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding $\mathbf {S}$ of the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure ${S}$ given its TPR $\mathbf {S}$. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix $\mathbf {R}$ has a left inverse $\mathbf {U}$: $\mathbf {U}\mathbf {R} = \mathbf {I}$. Now define the unbinding (or dual) vector for role $r_j$, $\mathbf {u}_j$, to be the $j^{{\mathrm {th}}}$ column of $\mathbf {U}^\top $: $\mathbf {U}_{:j}^\top $. Then, since $[\mathbf {I}]_{ji} = [\mathbf {U}\mathbf {R}]_{ji} = \mathbf {U}_{j:} \mathbf {R}_{:i} = [\mathbf {U}^\top _{:j}]^\top \mathbf {R}_{:i} = \mathbf {u}_j^\top \mathbf {r}_i = \mathbf {r}_i^\top \mathbf {u}_j$, we have $\mathbf {r}_i^\top \mathbf {u}_j = \delta _{ji}$. This means that, to recover the filler of $r_j$ in the structure with TPR $\mathbf {S}$, we can take its tensor inner product (or matrix-vector product) with $\mathbf {u}_j$: $\mathbf {S} \mathbf {u}_j = \left[ \sum _i \mathbf {f}_i \mathbf {r}_i^\top \right] \mathbf {u}_j = \sum _i \mathbf {f}_i \delta _{ij} = \mathbf {f}_j$
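A toy numpy illustration of binding and unbinding for the string example above, with arbitrary small dimensions and random embeddings:

```python
import numpy as np

# Toy TPR demo for the string "cba": bind fillers to roles, then unbind role r_1.
rng = np.random.default_rng(0)
F = rng.normal(size=(4, 3))      # filler embeddings for a, b, c (d_F = 4, n_F = 3)
R = rng.normal(size=(3, 3))      # linearly independent role embeddings (d_R = n_R = 3)

# S = c (x) r_1 + b (x) r_2 + a (x) r_3, with outer products as the binding operation.
S = np.outer(F[:, 2], R[:, 0]) + np.outer(F[:, 1], R[:, 1]) + np.outer(F[:, 0], R[:, 2])

U = np.linalg.pinv(R).T          # column j is the unbinding (dual) vector for role r_j
recovered = S @ U[:, 0]          # unbind role r_1: should recover the embedding of c
assert np.allclose(recovered, F[:, 2])
```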
In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors $\mathbf {r}_i$ and TPR unbinding using the tensor inner product with unbinding vectors $\mathbf {u}_j$. Binding will be used to produce the order-2 tensor $\mathbf {T}_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $\mathbf {H}$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another.
TP-N2F Model
We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. It depicts the following high-level description.
As shown in Figure FIGREF3, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$, a 3-tuple consisting of a binary relation (or operation) $R$ with its two arguments. The “TP-N2F encoder” uses two LSTMs to produce a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the “context” over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the “Reasoning MLP”, which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the “TP-N2F decoder” attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for a $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$ tuple (detailed explanation in Sec. SECREF7). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural network implementation of this description of N2F tasks.
TP-N2F Model ::: Role-level description of N2F tasks
In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations.
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for natural-language input
Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence $S$ with $n$ word tokens $\lbrace w^0,w^1,...,w^{n-1}\rbrace $, each word token $w^t$ is assigned a learned role vector $\mathbf {r}^t$, soft-selected from the learned dictionary $\mathbf {R}$, and a learned filler vector $\mathbf {f}^t$, soft-selected from the learned dictionary $\mathbf {F}$ (Sec. SECREF2). The mechanism closely follows that of BIBREF7, and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Then each word token $w^t$ is represented by the tensor product of the filler vector and the role vector: $\mathbf {T}^t = \mathbf {f}^t \otimes \mathbf {r}^t$. In addition to the set of all its token embeddings $\lbrace \mathbf {T}^0, \ldots , \mathbf {T}^{n-1} \rbrace $, the sentence $S$ as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: $\mathbf {T}_S = \sum _{t=0}^{n-1} \mathbf {T}^t$.
Using TPRs to encode natural language has several advantages. First, natural language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in BIBREF7). Second, TPRs avoid the Bag of Words (BoW) confusion BIBREF8: the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay, but the encodings are different with TPR embedding, because the role filled by a symbol changes with its context.
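A tiny numpy check of the BoW point, using made-up embeddings: the two bags of words sum to the same vector, while the TPR encodings differ.

```python
import numpy as np

rng = np.random.default_rng(2)
emb = {w: rng.normal(size=4) for w in ["Jay", "saw", "Kay"]}   # filler vectors
roles = rng.normal(size=(3, 3))                                # role vectors for positions 1..3

def sentence_tpr(words):
    return sum(np.outer(emb[w], roles[:, i]) for i, w in enumerate(words))

bow_1 = sum(emb[w] for w in ["Jay", "saw", "Kay"])
bow_2 = sum(emb[w] for w in ["Kay", "saw", "Jay"])
assert np.allclose(bow_1, bow_2)                               # BoW cannot tell them apart

tpr_1 = sentence_tpr(["Jay", "saw", "Kay"])
tpr_2 = sentence_tpr(["Kay", "saw", "Jay"])
assert not np.allclose(tpr_1, tpr_2)                           # TPR encodings differ
```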
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for relational representations
In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. Given a binary relation $rel$, a relational tuple can be written as $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ where $arg_1,arg_2$ indicate two arguments of relation $rel$. Let us adopt the two positional roles, $p_i^{rel} = $ arg$_i$-of-$rel$ for $i=1,2$. The filler of role $p_i^{rel}$ is $arg_i$. Now let us use role decomposition recursively, noting that the role $p_i^{rel}$ can itself be decomposed into a sub-role $p_i = $ arg$_i$-of-$\underline{\hspace{5.69054pt}}$ which has a sub-filler $rel$. Suppose that $arg_i, rel, p_i$ are embedded as vectors $\mathbf {a}_i, \mathbf {r}_{rel}, \mathbf {p}_i$. Then the TPR encoding of $p_i^{rel}$ is $\mathbf {r}_{rel} \otimes \mathbf {p}_i$, so the TPR encoding of filler $arg_i$ bound to role $p_i^{rel}$ is $\mathbf {a}_i \otimes (\mathbf {r}_{rel} \otimes \mathbf {p}_i)$. The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$, as: $\mathbf {H} = \mathbf {a}_1 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_1 + \mathbf {a}_2 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_2$. Given the unbinding vectors $\mathbf {p}^{\prime }_i$ for positional role vectors $\mathbf {p}_i$ and the unbinding vector $\mathbf {r}^{\prime }_{rel}$ for the vector $\mathbf {r}_{rel}$ that embeds relation $rel$, each argument can be unbound in two steps as shown in Eqs. SECREF7–SECREF7: $\mathbf {H} \cdot \mathbf {p}^{\prime }_i = [ \mathbf {a}_1 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_1 + \mathbf {a}_2 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_2 ] \cdot \mathbf {p}^{\prime }_i = \mathbf {a}_i \otimes \mathbf {r}_{rel}$
$[ \mathbf {a}_i \otimes \mathbf {r}_{rel} ] \cdot \mathbf {r}^{\prime }_{rel} = \mathbf {a}_i$. Here $\cdot $ denotes the tensor inner product, which for the order-3 $\mathbf {H}$ and order-1 $\mathbf {p}^{\prime }_i$ in Eq. SECREF7 can be defined as $[\mathbf {H} \cdot \mathbf {p}^{\prime }_i]_{jk} = \sum _l [\mathbf {H}]_{jkl} [\mathbf {p}^{\prime }_i]_l$; in Eq. SECREF7, $\cdot $ is equivalent to the matrix-vector product.
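A toy numpy sketch of this order-3 encoding and the two unbinding steps (the small dimensions and the explicit construction of the dual vectors are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d_A, d_O, d_P = 4, 3, 3                      # argument, relation, position dims (arbitrary)
a1, a2 = rng.normal(size=(2, d_A))           # argument embeddings
r = rng.normal(size=d_O)                     # relation embedding
P = rng.normal(size=(d_P, 2))                # two position role vectors p_1, p_2
P_dual = np.linalg.pinv(P).T                 # positional unbinding vectors p'_1, p'_2
r_dual = r / (r @ r)                         # relation unbinding vector (r' . r = 1)

# H = a1 (x) r (x) p1 + a2 (x) r (x) p2  (order-3 TPR of the tuple (rel arg1 arg2))
H = (np.einsum('i,j,k->ijk', a1, r, P[:, 0])
     + np.einsum('i,j,k->ijk', a2, r, P[:, 1]))

B1 = np.tensordot(H, P_dual[:, 0], axes=([2], [0]))   # first unbinding: a1 (x) r
arg1 = B1 @ r_dual                                     # second unbinding: recovers a1
assert np.allclose(arg1, a1)
```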
Our proposed scheme can be contrasted with the TPR scheme in which $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ is embedded as $\mathbf {r}_{rel} \otimes \mathbf {a}_1 \otimes \mathbf {a}_2$ (e.g., BIBREF11, BIBREF12). In that scheme, an $n$-ary-relation tuple is embedded as an order-($n+1$) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an $n$-ary-relation tuple is still embedded as an order-3 tensor: there are just $n$ terms in the sum in Eq. SECREF7, using $n$ position vectors $\mathbf {p}_1, \dots , \mathbf {p}_n$; unbinding simply requires knowing the unbinding vectors for these fixed position vectors.
In the model, the order-3 tensor $\mathbf {H}$ of Eq. SECREF7 has a different status than the order-2 tensor $\mathbf {T}_S$ of Sec. SECREF5. $\mathbf {T}_S$ is a TPR by construction, whereas $\mathbf {H}$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. SECREF7, and performs the unbinding operations which that structure calls for. In Appendix Sec. SECREF65, it is shown that, if unbinding each of a set of roles from some unknown tensor $\mathbf {T}$ gives a target set of fillers, then $\mathbf {T}$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. SECREF7.
TP-N2F Model ::: Role-level description of N2F tasks ::: The TP-N2F Scheme for Learning the input-output mapping
To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. As shown in (SECREF8), we formalize the learning scheme as learning a mapping function $f_{\mathrm {mapping}}(\cdot )$, which, given a structural representation of the natural-language input, $\mathbf {T}_S$, outputs a tensor $\mathbf {H}_F$ from which the structural representation of the output can be generated. At the role level of description, there's nothing more to be said about this mapping; how it is modeled at the neural network level is discussed in Sec. SECREF10. $\mathbf {H}_F = f_{\mathrm {mapping}}(\mathbf {T}_S)$
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation
As shown in Figure FIGREF3, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding. The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. SECREF5. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. SECREF7: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. SECREF8) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs $^t$ produced by the TP-N2F Encoder. The detailed implementations are introduced below.
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F natural-language Encoder
The TP-N2F encoder follows the role scheme in Sec. SECREF5 to encode each word token $w^t$ by soft-selecting one of $n_{\mathrm {F}}$ fillers and one of $n_{\mathrm {R}}$ roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure FIGREF11.) At each time-step $t$, the Filler-LSTM and the Role-LSTM take a learned word-token embedding $\mathbf {w}^t$ as input. The hidden state of the Filler-LSTM, $\mathbf {h}_{\mathrm {F}}^t$, is used to compute softmax scores $u_k^{\mathrm {F}}$ over $n_{\mathrm {F}}$ filler slots, and a filler vector $\mathbf {f}^{t} = \mathbf {F} \mathbf {u}^{\mathrm {F}}$ is computed from the softmax scores (recall from Sec. SECREF2 that $\mathbf {F}$ is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, $\mathbf {h}_{\mathrm {R}}^t$. $f_{\mathrm {F}}$ and $f_{\mathrm {R}}$ denote the functions that generate $\mathbf {f}^{t}$ and $\mathbf {r}^t$ from the hidden states of the two LSTMs. The token $w^t$ is encoded as $\mathbf {T}^t$, the tensor product of $\mathbf {f}^{t}$ and $\mathbf {r}^t$. $\mathbf {T}^t$ replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector $\mathbf {c}^t$: see (SECREF10)–(SECREF10). After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products $\sum _t \mathbf {T}^t$ to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. SECREF22 of the Appendix. $\mathbf {h}_{\mathrm {F}}^t = f_{\mathrm {Filler\text{-}LSTM}}(\mathbf {w}^t, \mathbf {T}^{t-1}, \mathbf {c}_{\mathrm {F}}^{t-1}) \qquad \mathbf {h}_{\mathrm {R}}^t = f_{\mathrm {Role\text{-}LSTM}}(\mathbf {w}^t, \mathbf {T}^{t-1}, \mathbf {c}_{\mathrm {R}}^{t-1})$
$\mathbf {T}^t = \mathbf {f}^t \otimes \mathbf {r}^t = f_{\mathrm {F}}(\mathbf {h}_{\mathrm {F}}^t) \otimes f_{\mathrm {R}}(\mathbf {h}_{\mathrm {R}}^t)$
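For concreteness, a much-simplified PyTorch-style sketch of this encoder loop; the class name, the use of LSTMCell, and all shapes are assumptions, and details such as the temperature and the exact attention parameterization follow the Appendix only loosely.

```python
import torch
import torch.nn as nn

class TPN2FEncoderSketch(nn.Module):
    """Sketch of the Filler/Role LSTM encoder (names and shapes are assumptions)."""
    def __init__(self, d_word, n_f, d_f, n_r, d_r, temperature=0.1):
        super().__init__()
        d_t = d_f * d_r                                       # flattened TPR size d_T
        self.filler_cell = nn.LSTMCell(d_word, d_t)
        self.role_cell = nn.LSTMCell(d_word, d_t)
        self.filler_attn = nn.Linear(d_t, n_f)                # scores over n_F filler slots
        self.role_attn = nn.Linear(d_t, n_r)                  # scores over n_R role slots
        self.F = nn.Parameter(torch.randn(d_f, n_f))          # filler dictionary
        self.R = nn.Parameter(torch.randn(d_r, n_r))          # role dictionary
        self.temperature = temperature
        self.d_f, self.d_r = d_f, d_r

    def forward(self, word_embs):                             # word_embs: (seq_len, d_word)
        d_t = self.d_f * self.d_r
        T_prev = word_embs.new_zeros(1, d_t)                  # flattened T^{t-1} plays the hidden-state role
        c_f = word_embs.new_zeros(1, d_t)
        c_r = word_embs.new_zeros(1, d_t)
        token_tprs = []
        for w in word_embs:                                   # one token per step
            h_f, c_f = self.filler_cell(w.unsqueeze(0), (T_prev, c_f))
            h_r, c_r = self.role_cell(w.unsqueeze(0), (T_prev, c_r))
            a_f = torch.softmax(self.filler_attn(h_f) / self.temperature, dim=-1)
            a_r = torch.softmax(self.role_attn(h_r) / self.temperature, dim=-1)
            f_t = a_f @ self.F.t()                            # soft-selected filler vector (1, d_f)
            r_t = a_r @ self.R.t()                            # soft-selected role vector (1, d_r)
            T_t = torch.einsum('bi,bj->bij', f_t, r_t)        # binding: f^t (x) r^t
            token_tprs.append(T_t)
            T_prev = T_t.reshape(1, d_t)                      # T^t replaces the LSTM hidden vector
        return token_tprs, sum(token_tprs)                    # per-token TPRs and their sum T_S
```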
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F Relational-Tuple Decoder
The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure FIGREF13). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs $\lbrace \mathbf {T}^t \rbrace $. The hidden-state $\mathbf {H}$ of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make $\mathbf {H}$ suitably approximate a TPR. At each time step $t$, the hidden state $\mathbf {H}^t$ of the Tuple-LSTM with attention (the version in BIBREF13) (SECREF12) is fed as input to the unbinding module, which regards $\mathbf {H}^t$ as if it were the TPR of a relational tuple with $m$ arguments possessing the role structure described in Sec. SECREF7: $\mathbf {H}^t \approx \sum _{i=1}^{m} \mathbf {a}_{i}^t \otimes \mathbf {r}_{rel}^t \otimes \mathbf {p}_i$. (In Figure FIGREF13, the assumed hypothetical form of $\mathbf {H}^t$, as well as that of $\mathbf {B}_i^t$ below, is shown in a bubble with dashed border.) To decode a binary relational tuple, the unbinding module decodes it from $\mathbf {H}^t$ using the two steps of TPR unbinding given in (SECREF7)–(SECREF7). The positional unbinding vectors $\mathbf {p}^{\prime }_{i}$ are learned during training and shared across all time steps. After the first unbinding step (SECREF7), i.e., the inner product of $\mathbf {H}^t$ with $\mathbf {p}^{\prime }_i$, we get tensors $\mathbf {B}_{i}^t$ (SECREF12). These are treated as the TPRs of two arguments $\mathbf {a}_i^t$ bound to a relation $\mathbf {r}_{rel}^t$. A relational unbinding vector $\mathbf {r}_{rel}^{\prime t}$ is computed by a linear function from the sum of the $\mathbf {B}_{i}^t$ and used to compute the inner product with each $\mathbf {B}_i^t$ to yield $\mathbf {a}_i^t$, which are treated as the embedding of argument vectors (SECREF12). Based on the TPR theory, $\mathbf {r}_{rel}^{\prime t}$ is passed to a linear function to get $\mathbf {r}_{rel}^t$ as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (Detailed equations are in Appendix Sec. SECREF42) $\mathbf {H}^t = \mathrm {Atten}(f_{\mathrm {Tuple\text{-}LSTM}}(\mathbf {rel}^t, \mathbf {arg}_1^t, \mathbf {arg}_2^t, \mathbf {H}^{t-1}, \mathbf {c}^{t-1}), [\mathbf {T}^0, ..., \mathbf {T}^{n-1}])$
$\mathbf {B}_1^t = \mathbf {H}^t \cdot \mathbf {p}_1^{\prime } \qquad \mathbf {B}_2^t = \mathbf {H}^t \cdot \mathbf {p}_2^{\prime }$
$\mathbf {r}_{rel}^{\prime t} = f_{\mathrm {linear}}(\mathbf {B}_1^t + \mathbf {B}_2^t) \qquad \mathbf {a}_1^t = \mathbf {B}_1^t \cdot \mathbf {r}_{rel}^{\prime t} \qquad \mathbf {a}_2^t = \mathbf {B}_2^t \cdot \mathbf {r}_{rel}^{\prime t}$
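A simplified PyTorch-style sketch of the unbinding module applied to a flattened decoder hidden state; the module name and the exact form of the linear maps are assumptions.

```python
import torch
import torch.nn as nn

class UnbindingModuleSketch(nn.Module):
    """Two-step TPR unbinding of the Tuple-LSTM hidden state (assumed shapes)."""
    def __init__(self, d_arg, d_rel, d_pos):
        super().__init__()
        self.p_dual = nn.Parameter(torch.randn(d_pos, 2))     # positional unbinding vectors p'_1, p'_2
        self.rel_dual = nn.Linear(d_arg * d_rel, d_rel)       # produces r'_rel from B_1 + B_2
        self.rel_out = nn.Linear(d_rel, d_rel)                # maps r'_rel to the relation embedding
        self.shape = (d_arg, d_rel, d_pos)

    def forward(self, h):                                     # h: flattened decoder hidden state
        d_arg, d_rel, d_pos = self.shape
        H = h.view(d_arg, d_rel, d_pos)                       # treat hidden state as an order-3 TPR
        B1 = H @ self.p_dual[:, 0]                            # first unbinding -> (d_arg, d_rel)
        B2 = H @ self.p_dual[:, 1]
        r_dual = self.rel_dual((B1 + B2).reshape(-1))         # relational unbinding vector r'_rel
        a1 = B1 @ r_dual                                      # second unbinding -> argument embeddings
        a2 = B2 @ r_dual
        rel = self.rel_out(r_dual)                            # relation embedding
        return rel, a1, a2
```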
TP-N2F Model ::: Inference and The Learning Strategy of the TP-N2F Model
During inference time, natural language questions are encoded via the encoder and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step.
TP-N2F is trained using back-propagation BIBREF14 with the Adam optimizer BIBREF15 and teacher-forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens from the argument vocabulary. For an input ${\mathcal {I}}$ that generates $N$ output relational tuples, the loss is the sum of the cross entropy loss ${\mathcal {L}}$ between the true labels $L$ and predicted tokens for relations and arguments as shown in (SECREF14). $\mathcal {L}_{\mathcal {I}} = \sum _{i=0}^{N-1} \mathcal {L}(rel^i, L_{rel}^i) + \sum _{i=0}^{N-1} \sum _{j=1}^{2} \mathcal {L}(arg_j^i, L_{arg_j}^i)$
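A minimal sketch of this loss, assuming the decoder exposes per-step logits over the relation and argument vocabularies:

```python
import torch
import torch.nn.functional as F

def tuple_sequence_loss(rel_logits, arg1_logits, arg2_logits, rel_gold, arg1_gold, arg2_gold):
    """Cross-entropy over relations and arguments, summed over the N output tuples.

    *_logits: tensors of shape (N, vocab) over the relation / argument vocabularies.
    *_gold:   tensors of shape (N,) with gold token indices (teacher forcing).
    """
    loss = F.cross_entropy(rel_logits, rel_gold, reduction='sum')
    loss = loss + F.cross_entropy(arg1_logits, arg1_gold, reduction='sum')
    loss = loss + F.cross_entropy(arg2_logits, arg2_gold, reduction='sum')
    return loss
```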
EXPERIMENTS
The proposed TP-N2F model is evaluated on two N2F tasks, generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results of each task and the analysis of the unbinding relation vectors are introduced in turn. Details of experiments and datasets are described in Sec. SECREF20 in the Appendix.
EXPERIMENTS ::: Generating operation sequences to solve math problems
Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.
EXPERIMENTS ::: Generating program trees from natural-language descriptions
Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than the TP-N2F Encoder. This may be because Lisp code relies more heavily on structural representations.
EXPERIMENTS ::: Interpretation of learned structure
To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. K-means clustering results on the average vectors are presented in Figure FIGREF71 and Figure FIGREF72 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operation-sequence-answer pairs. More clustering results are presented in the Appendix A.6.
Related work
N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing BIBREF19, BIBREF20, BIBREF21, BIBREF16, BIBREF17, BIBREF18. These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language BIBREF7, BIBREF9. TPR unbinding has also been used to generate natural language captions from images BIBREF8. Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space BIBREF22, BIBREF11, BIBREF12. However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks.
CONCLUSION AND FUTURE WORK
In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoder-decoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The results show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural- to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks.
Appendix ::: Implementations of TP-N2F for experiments
In this section, we present details of the experiments of TP-N2F on the two datasets. We present the implementation of TP-N2F on each dataset.
The MathQA dataset consists of about 37k math word problems ((80/12/8)% training/dev/testing problems), each with a corresponding list of multi-choice options and a straight-line operation sequence program to solve the problem. An example from the dataset is presented in the Appendix A.4. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed to generate the solution for the given math problem. We use the execution script from BIBREF16 to execute the generated operation sequence and compute the multi-choice accuracy for each problem. During our experiments we observed that there are about 30% noisy examples (on which the execution script fails to get the correct answer on the ground truth program). Therefore, we report both execution accuracy (the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly).
The AlgoLisp dataset BIBREF17 is a program synthesis dataset, which has 79k/9k/10k training/dev/testing samples. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of commands from leaves to root and (as in MathQA) use the symbol $\#_i$ to indicate the result of the $i^{\mathrm {th}}$ command (generated previously by the model). A dataset sample with our parsed command sequence is presented in the Appendix A.4. AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: accuracy of passing all test cases (Acc), accuracy of passing 50% of test cases (50p-Acc), and accuracy of generating an exactly matched program (M-Acc). AlgoLisp has about 10% noisy data (where the execution script fails to pass all test cases on the ground truth program), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed).
We use $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ to indicate the TP-N2F encoder hyperparameters, the dimension of role vectors, the number of roles, the dimension of filler vectors and the number of fillers. $d_{Rel}, d_{Arg},d_{Pos}$ indicate the TP-N2F decoder hyper-parameters, the dimension of relation vectors, the dimension of argument vectors, and the dimension of position vectors.
In the experiment on the MathQA dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 20$, $d_{Rel} = 20$, $d_{Arg} = 10$, $d_{Pos} = 5$ and we train the model for 60 epochs with learning rate 0.00115. The reasoning module only contains one layer. As most of the math operators in this dataset are binary, we replace all operators taking three arguments with a set of binary operators based on hand-encoded rules, and for all operators taking one argument, a padding symbol is appended. For the baseline SEQ2PROG-orig, TP2LSTM and LSTM2TP, we use hidden size 100, single-direction, one-layer LSTM. For the SEQ2PROG-best, we performed a hyperparameter search on the hidden size for both encoder and decoder; the best score is reported.
In the experiment on the AlgoLisp dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 30$, $d_{Rel} = 30$, $d_{Arg} = 20$, $d_{Pos} = 5$ and we train the model for 50 epochs with learning rate 0.00115. We also use one layer in the reasoning module, as in MathQA. For this dataset, most function calls take three arguments so we simply add padding symbols for those functions with fewer than three arguments.
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F encoder
Filler-LSTM in TP-N2F encoder
This is a standard LSTM, governed by the equations:
$\varphi , \tanh $ are the logistic sigmoid and tanh functions applied elementwise. $\flat $ flattens (reshapes) a matrix in $\mathbb {R}^{d_{\mathrm {F}} \times d_{\mathrm {R}}}$ into a vector in $\mathbb {R}^{d_{\mathrm {T}}}$, where $d_{\mathrm {T}} = d_{\mathrm {F}} d_{\mathrm {R}}$. $\odot $ is elementwise multiplication. The variables have the following dimensions: the Filler-LSTM gate, cell, and hidden vectors, the bias vectors $\mathbf {b}_{\mathrm {ff}}, \mathbf {b}_{\mathrm {fg}}, \mathbf {b}_{\mathrm {fi}}, \mathbf {b}_{\mathrm {fo}}$, and $\flat (\mathbf {T}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {T}}}$; the word embedding $\mathbf {w}^t \in \mathbb {R}^{d}$; the input weight matrices $\mathbf {W}_{\mathrm {ff}}, \mathbf {W}_{\mathrm {fg}}, \mathbf {W}_{\mathrm {fi}}, \mathbf {W}_{\mathrm {fo}} \in \mathbb {R}^{d_{\mathrm {T}} \times d}$; and the recurrent weight matrices $\mathbf {U}_{\mathrm {ff}}, \mathbf {U}_{\mathrm {fg}}, \mathbf {U}_{\mathrm {fi}}, \mathbf {U}_{\mathrm {fo}} \in \mathbb {R}^{d_{\mathrm {T}} \times d_{\mathrm {T}}}$.
Filler vector
The filler vector for input token $w^t$ is $\mathbf {f}^t$, defined through an attention vector over possible fillers, $\mathbf {a}_{\mathrm {f}}^t$:
($\mathbf {W}_{\mathrm {f}}$ is the same as $\mathbf {F}$ of Sec. SECREF2.) The variables' dimensions are: $\mathbf {W}_{\mathrm {fa}} \in \mathbb {R}^{n_{\mathrm {F}} \times d_{\mathrm {T}}}$, $\mathbf {a}_{\mathrm {f}}^t \in \mathbb {R}^{n_{\mathrm {F}}}$, $\mathbf {W}_{\mathrm {f}} \in \mathbb {R}^{d_{\mathrm {F}} \times n_{\mathrm {F}}}$, and $\mathbf {f}^t \in \mathbb {R}^{d_{\mathrm {F}}}$. $T$ is the temperature factor, which is fixed at 0.1.
Role-LSTM in TP-N2F encoder
Similar to the Filler-LSTM, the Role-LSTM is also a standard LSTM, governed by the equations:
The variable dimensions are: the Role-LSTM gate, cell, and hidden vectors, the bias vectors $\mathbf {b}_{\mathrm {rf}}, \mathbf {b}_{\mathrm {rg}}, \mathbf {b}_{\mathrm {ri}}, \mathbf {b}_{\mathrm {ro}}$, and $\flat (\mathbf {T}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {T}}}$; the word embedding $\mathbf {w}^t \in \mathbb {R}^{d}$; the input weight matrices $\mathbf {W}_{\mathrm {rf}}, \mathbf {W}_{\mathrm {rg}}, \mathbf {W}_{\mathrm {ri}}, \mathbf {W}_{\mathrm {ro}} \in \mathbb {R}^{d_{\mathrm {T}} \times d}$; and the recurrent weight matrices $\mathbf {U}_{\mathrm {rf}}, \mathbf {U}_{\mathrm {rg}}, \mathbf {U}_{\mathrm {ri}}, \mathbf {U}_{\mathrm {ro}} \in \mathbb {R}^{d_{\mathrm {T}} \times d_{\mathrm {T}}}$.
Role vector
The role vector for input token $w^t$ is determined analogously to its filler vector:
The dimensions are: $\mathbf {W}_{\mathrm {ra}} \in \mathbb {R}^{n_{\mathrm {R}} \times d_{\mathrm {T}}}$, $\mathbf {a}_{\mathrm {r}}^t \in \mathbb {R}^{n_{\mathrm {R}}}$, $\mathbf {W}_{\mathrm {r}} \in \mathbb {R}^{d_{\mathrm {R}} \times n_{\mathrm {R}}}$, and $\mathbf {r}^t \in \mathbb {R}^{d_{\mathrm {R}}}$.
Binding
The TPR for the filler/role binding for token $w^t$ is then:
where $\mathbf {T}^t \in \mathbb {R}^{d_{\mathrm {R}} \times d_{\mathrm {F}}}$
Appendix ::: Detailed equations of TP-N2F ::: Structure Mapping
$\mathbf {H}^0 \in \mathbb {R}^{d_{\mathrm {H}}}$, where $d_{\mathrm {H}} = d_{\mathrm {A}} d_{\mathrm {O}} d_{\mathrm {P}}$ and $d_{\mathrm {A}}, d_{\mathrm {O}}, d_{\mathrm {P}}$ are the dimensions of the argument vector, operator vector and position vector. $f_{\mathrm {mapping}}$ is implemented with an MLP (linear layer followed by a tanh) for mapping the $\mathbf {T}_S \in \mathbb {R}^{d_{\mathrm {T}}}$ to the initial state of the decoder $\mathbf {H}^0$.
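A one-line sketch of this mapping, plugging in the MathQA hyperparameters reported earlier and assuming $d_{\mathrm {H}}$ is the product of the three decoder dimensions:

```python
import torch
import torch.nn as nn

d_T = 30 * 20            # d_F * d_R for the MathQA setting
d_H = 10 * 20 * 5        # d_Arg * d_Rel * d_Pos, assumed to be the flattened decoder size
reasoning_mlp = nn.Sequential(nn.Linear(d_T, d_H), nn.Tanh())
H0 = reasoning_mlp(torch.randn(d_T))   # flattened sentence TPR -> initial decoder state H^0
```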
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F decoder
Tuple-LSTM
The output tuples are also generated via a standard LSTM:
Here, $\gamma $ is the concatenation function. $\mathbf {d}_{Rel}^{t-1}$ is the trained embedding vector for the Relation of the input binary tuple, $\mathbf {d}_{Arg1}^{t-1}$ is the embedding vector for the first argument and $\mathbf {d}_{Arg2}^{t-1}$ is the embedding vector for the second argument. Then the input for the Tuple LSTM is the concatenation of the embedding vectors of relation and arguments, with dimension $d_{\mathrm {dec}}$. The gate, cell, and hidden vectors of the Tuple-LSTM (including $\mathbf {h}_{\mathrm {input}}^t$), its bias vectors, and $\flat (\mathbf {H}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {H}}}$; the concatenated input $\mathbf {d}^t \in \mathbb {R}^{d_{\mathrm {dec}}}$; the input weight matrices lie in $\mathbb {R}^{d_{\mathrm {H}} \times d_{\mathrm {dec}}}$; the recurrent weight matrices lie in $\mathbb {R}^{d_{\mathrm {H}} \times d_{\mathrm {H}}}$; and $\mathbf {H}^t \in \mathbb {R}^{d_{\mathrm {H}}}$. ${\mathrm {Atten}}$ is the attention mechanism used in BIBREF13, which computes the dot product between $\mathbf {h}_{\mathrm {input}}^t$ and each $\mathbf {T}_{t^{\prime }}$. Then a linear function is used on the concatenation of $\mathbf {h}_{\mathrm {input}}^t$ and the softmax scores on all dot products to generate $\mathbf {H}^t$. The following equations show the attention mechanism:
${\mathrm {score}}$ is the score function of the attention. In this paper, the score function is dot product. The dimensions are: $\mathbf {T} \in \mathbb {R}^{d_{\mathrm {H}} \times n}$ (the stacked encoder outputs), the vector of attention scores lies in $\mathbb {R}^{n}$, $\mathbf {H}^t \in \mathbb {R}^{d_{\mathrm {H}}}$, and the weight matrix of the linear function lies in $\mathbb {R}^{d_{\mathrm {H}} \times (d_{\mathrm {T}}+n)}$.
Unbinding
At each timestep $t$, the 2-step unbinding process described in Sec. SECREF7 operates first on an encoding of the triple as a whole, $\mathbf {H}$, using two unbinding vectors $\mathbf {p}_i^{\prime }$ that are learned but fixed for all tuples. This first unbinding gives an encoding of the two operator-argument bindings, $\mathbf {B}_i$. The second unbinding operates on the $\mathbf {B}_i$, using a generated unbinding vector for the operator, $\mathbf {r}_{rel}^{\prime }$, giving encodings of the arguments, $\mathbf {a}_i$. The generated unbinding vector for the operator, $\mathbf {r}_{rel}^{\prime }$, and the generated encodings of the arguments, $\mathbf {a}_i$, each produce a probability distribution over symbolic operator outputs $Rel$ and symbolic argument outputs $Arg_i$; these probabilities are used in the cross-entropy loss function. For generating a single symbolic output, the most-probable symbols are selected.
The dimensions are: $\mathbf {r}_{rel}^{\prime t} \in \mathbb {R}^{d_{\mathrm {O}}}$; $\mathbf {a}_1^t, \mathbf {a}_2^t \in \mathbb {R}^{d_{\mathrm {A}}}$; $\mathbf {p}^{\prime }_1, \mathbf {p}^{\prime }_2 \in \mathbb {R}^{d_{\mathrm {P}}}$; $\mathbf {B}_1^t, \mathbf {B}_2^t \in \mathbb {R}^{d_{\mathrm {A}} \times d_{\mathrm {O}}}$; the parameters of the dual (unbinding) linear map lie in $\mathbb {R}^{d_{\mathrm {H}}}$; the output layers producing symbol scores have weights in $\mathbb {R}^{n_{\mathrm {O}} \times d_{\mathrm {O}}}$ (relations) and $\mathbb {R}^{n_{\mathrm {A}} \times d_{\mathrm {A}}}$ (arguments); the relation score vector lies in $\mathbb {R}^{n_{\mathrm {R}}}$; and the argument score vectors lie in $\mathbb {R}^{n_{\mathrm {A}}}$.
Appendix ::: The tensor that is input to the decoder's Unbinding Module is a TPR
Here we show that, if learning is successful, the order-3 tensor $\mathbf {H}$ that each iteration of the decoder's Tuple LSTM feeds to the decoder's Unbinding Module (Figure FIGREF13) will be a TPR of the form assumed in Eq. SECREF7, repeated here: $\mathbf {H} = \sum _j \mathbf {a}_j \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_j$. The operations performed by the decoder are given in Eqs. SECREF7–SECREF7, and Eqs. SECREF12–SECREF12, rewritten here: $\mathbf {H} \cdot \mathbf {p}_i^{\prime } = \mathbf {B}_i$
$\mathbf {B}_i \cdot \mathbf {r}_{rel}^{\prime } = \mathbf {a}_i$ This is the standard TPR unbinding operation, used recursively: first with the unbinding vectors for positions, $\mathbf {p}_i^{\prime }$, then with the unbinding vector for the operator, $\mathbf {r}_{rel}^{\prime }$. It therefore suffices to analyze a single unbinding; the result can then be used recursively. This in effect reduces the problem to the order-2 case. What we will show is: given a set of unbinding vectors $\lbrace \mathbf {u}_i^{\prime } \rbrace $ which are dual to a set of role vectors $\lbrace \mathbf {r}_i \rbrace $, with $i$ ranging over some index set $I$, if $\mathbf {T}$ is an order-2 tensor such that $\mathbf {T} \mathbf {u}_i^{\prime } = \mathbf {f}_i, \forall i \in I$, then $\mathbf {T} = \mathbf {T}_{{\mathrm {TPR}}} + \mathbf {Z}$ with $\mathbf {T}_{{\mathrm {TPR}}} = \sum _{i \in I} \mathbf {f}_i \mathbf {r}_i^{\top }$, for some tensor $\mathbf {Z}$ that annihilates all the unbinding vectors: $\mathbf {Z} \mathbf {u}_i^{\prime } = \mathbf {0}, \forall i \in I$. If learning is successful, the processing in the decoder will generate the target relational tuple $(R, A_1, A_2)$ by obeying Eq. SECREF65 in the first unbinding, where we have $\mathbf {u}_i^{\prime } = \mathbf {p}_i^{\prime }, \mathbf {f}_i = \mathbf {B}_i, I = \lbrace 1, 2\rbrace $, and obeying Eq. SECREF65 in the second unbinding, where we have $\mathbf {u}_i^{\prime } = \mathbf {r}_{rel}^{\prime }, \mathbf {f}_i = \mathbf {a}_i$, with $I =$ the set containing only the null index.
Treat rank-2 tensors as matrices; then unbinding is simply matrix-vector multiplication. Assume the set of unbinding vectors is linearly independent (otherwise there would in general be no way to satisfy Eq. SECREF65 exactly, contrary to assumption). Then expand the set of unbinding vectors, if necessary, into a basis $\lbrace \mathbf {u}^{\prime }_k\rbrace _{k \in K \supseteq I}$. Find the dual basis, with $\mathbf {r}_k$ dual to $\mathbf {u}^{\prime }_k$ (so that $\mathbf {r}_l^\top \mathbf {u}_j^{\prime } = \delta _{lj}$). Because $\lbrace \mathbf {u}^{\prime }_k\rbrace _{k \in K}$ is a basis, so is $\lbrace \mathbf {r}_k\rbrace _{k \in K}$, so any matrix $\mathbf {T}$ can be expanded as $\mathbf {T} = \sum _{k \in K} \mathbf {c}_k \mathbf {r}_k^{\top }$. Since $\mathbf {T} \mathbf {u}^{\prime }_i = \mathbf {f}_i, \forall i \in I$ are the unbinding conditions (Eq. SECREF65), we must have $\mathbf {c}_i = \mathbf {f}_i, i \in I$. Let $\mathbf {T}_{{\mathrm {TPR}}} \equiv \sum _{i \in I} \mathbf {f}_i \mathbf {r}_i^{\top }$. This is the desired TPR, with fillers $\mathbf {f}_i$ bound to the role vectors $\mathbf {r}_i$ which are the duals of the unbinding vectors $\mathbf {u}_i^{\prime }$ ($i \in I$). Then we have $\mathbf {T} = \mathbf {T}_{{\mathrm {TPR}}} + \mathbf {Z}$ (Eq. SECREF65) where $\mathbf {Z} \equiv \sum _{j \in K, j \notin I} \mathbf {c}_j \mathbf {r}_j^{\top }$; so $\mathbf {Z} \mathbf {u}_i^{\prime } = {\mathbf {0}}, i \in I$ (Eq. SECREF65). Thus, if training is successful, the model must have learned how to feed the decoder with order-3 TPRs with the structure posited in Eq. SECREF65.
The argument so far addresses the case where the unbinding vectors are linearly independent, making it possible to satisfy Eq. SECREF65 exactly. In relatively high-dimensional vector spaces, it will often happen that even when the number of unbinding vectors exceeds the dimension of their space by a factor of 2 or 3 (which applies to the TP-N2F models presented here), there is a set of role vectors $\lbrace \mathbf {r}_k \rbrace _{k \in K}$ approximately dual to $\lbrace \mathbf {u}^{\prime }_k \rbrace _{k \in K}$, such that $\mathbf {r}_l^\top \mathbf {u}_j^{\prime } = \delta _{lj} \hspace{2.84526pt}\forall l, j \in K$ holds to a good approximation. (If the distribution of normalized unbinding vectors is approximately uniform on the unit sphere, then choosing the approximate dual vectors to equal the unbinding vectors themselves will do, since they will be nearly orthonormal BIBREF10. If the $\lbrace \mathbf {u}^{\prime }_k \rbrace _{k \in K}$ are not normalized, we just rescale the role vectors, choosing $\mathbf {r}_k = \mathbf {u}_k^{\prime } / \Vert \mathbf {u}_k^{\prime } \Vert ^2$.) When the number of such role vectors exceeds the dimension of the embedding space, they will be overcomplete, so while it is still true that any matrix $\mathbf {T}$ can be expanded as above ($\mathbf {T} = \sum _{k \in K} \mathbf {c}_k \mathbf {r}_k^{\top }$), this expansion will no longer be unique. So while it remains true that $\mathbf {T}$ is a TPR, it is no longer uniquely decomposable into filler/role pairs. The claim above does not claim uniqueness in this sense, and remains true.
Appendix ::: Dataset samples ::: Data sample from MathQA dataset
Problem: The present polulation of a town is 3888. Population increase rate is 20%. Find the population of town after 1 year?
Options: a) 2500, b) 2100, c) 3500, d) 3600, e) 2700
Operations: multiply(n0,n1), divide(#0,const-100), add(n0,#1)
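To make the execution step concrete, here is a hypothetical mini-interpreter for this straight-line format; the operator set, the numeric value of const-100, and the #i convention are inferred from the sample, and the official execution script of BIBREF16 is what is actually used.

```python
# Hypothetical mini-interpreter for the straight-line operation sequence above,
# with n0 = 3888 and n1 = 20 taken from the problem statement.
OPS = {"add": lambda x, y: x + y,
       "subtract": lambda x, y: x - y,
       "multiply": lambda x, y: x * y,
       "divide": lambda x, y: x / y}

def run(program, env):
    results = []                                   # "#i" refers to the i-th intermediate result
    for op, a, b in program:
        x = results[int(a[1:])] if a.startswith("#") else env[a]
        y = results[int(b[1:])] if b.startswith("#") else env[b]
        results.append(OPS[op](x, y))
    return results[-1]

program = [("multiply", "n0", "n1"), ("divide", "#0", "const-100"), ("add", "n0", "#1")]
print(run(program, {"n0": 3888, "n1": 20, "const-100": 100}))   # 3888 * 20 / 100 + 3888 = 4665.6
```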
Appendix ::: Dataset samples ::: Data sample from AlgoLisp dataset
Problem: Consider an array of numbers and a number, decrements each element in the given array by the given number, what is the given array?
Program Nested List: (map a (partial1 b –))
Command-Sequence: (partial1 b –), (map a #0)
Appendix ::: Generated programs comparison
In this section, we display some generated samples from the two datasets, where the TP-N2F model generates correct programs but LSTM-Seq2Seq does not.
Question: A train running at the speed of 50 km per hour crosses a post in 4 seconds. What is the length of the train?
TP-N2F(correct):
(multiply,n0,const1000) (divide,#0,const3600) (multiply,n1,#1)
LSTM(wrong):
(multiply,n0,const0.2778) (multiply,n1,#0)
Question: 20 is subtracted from 60 percent of a number, the result is 88. Find the number?
TP-N2F(correct):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1)
LSTM(wrong):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1) (multiply,#2,n3) (subtract,#3,n0)
Question: The population of a village is 14300. It increases annually at the rate of 15 percent. What will be its population after 2 years?
TP-N2F(correct):
(divide,n1,const100) (add,#0,const1) (power,#1,n2) (multiply,n0,#2)
LSTM(wrong):
(multiply,const4,const100) (sqrt,#0)
Question: There are two groups of students in the sixth grade. There are 45 students in group a, and 55 students in group b. If, on a particular day, 20 percent of the students in group a forget their homework, and 40 percent of the students in group b forget their homework, then what percentage of the sixth graders forgot their homework?
TP-N2F(correct):
(add,n0,n1) (multiply,n0,n2) (multiply,n1,n3) (divide,#1,const100) (divide,#2,const100) (add,#3,#4) (divide,#5,#0) (multiply,#6,const100)
LSTM(wrong):
(multiply,n0,n1) (subtract,n0,n1) (divide,#0,#1)
Question: 1 divided by 0.05 is equal to
TP-N2F(correct):
(divide,n0,n1)
LSTM(wrong):
(divide,n0,n1) (multiply,n2,#0)
Question: Consider a number a, compute factorial of a
TP-N2F(correct):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( invoke1,#5,a )
LSTM(wrong):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( len,a ) ( invoke1,#5,#6 )
Question: Given an array of numbers and numbers b and c, add c to elements of the product of elements of the given array and b, what is the product of elements of the given array and b?
TP-N2F(correct):
( partial, b,* ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
LSTM(wrong):
( partial1,b,+ ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
Question: You are given an array of numbers a and numbers b , c and d , let how many times you can replace the median in a with sum of its digits before it becomes a single digit number and b be the coordinates of one end and c and d be the coordinates of another end of segment e , your task is to find the length of segment e rounded down
TP-N2F(correct):
( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 ) ( - #13 c ) ( digits arg1 ) ( len #15 ) ( == #16 1 ) ( digits arg1 ) ( reduce #18 0 + ) ( self #19 ) ( + 1 #20 ) ( if #17 0 #21 ) ( lambda1 #22 ) ( sort a ) ( len a ) ( / #25 2 ) ( deref #24 #26 ) ( invoke1 #23 #27 ) ( - #28 c ) ( * #14 #29 ) ( - b d ) ( - b d ) ( * #31 #32 ) ( + #30 #33 ) ( sqrt #34 ) ( floor #35 )
LSTM(wrong): ( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 c ) ( - #13 ) ( - b d ) ( - b d ) ( * #15 #16 ) ( * #14 #17 ) ( + #18 ) ( sqrt #19 ) ( floor #20 )
Question: Given numbers a , b , c and e , let d be c , reverse digits in d , let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f , find the length of segment f squared
TP-N2F(correct):
( digits c ) ( reverse #0 ) ( * arg1 10 ) ( + #2 arg2 ) ( lambda2 #3 ) ( reduce #1 0 #4 ) ( - a #5 ) ( digits c ) ( reverse #7 ) ( * arg1 10 ) ( + #9 arg2 ) ( lambda2 #10 ) ( reduce #8 0 #11 ) ( - a #12 ) ( * #6 #13 ) ( + b 1 ) ( range 0 #15 ) ( digits arg1 ) ( reverse #17 ) ( * arg1 10 ) ( + #19 arg2 ) ( lambda2 #20 ) ( reduce #18 0 #21 ) ( digits arg2 ) ( reverse #23 ) ( * arg1 10 ) ( + #25 arg2 ) ( lambda2 #26 ) ( reduce #24 0 #27 ) ( > #22 #28 ) ( if #29 arg1 arg2 ) ( lambda2 #30 ) ( reduce #16 0 #31 ) ( - #32 e ) ( + b 1 ) ( range 0 #34 ) ( digits arg1 ) ( reverse #36 ) ( * arg1 10 ) ( + #38 arg2 ) ( lambda2 #39 ) ( reduce #37 0 #40 ) ( digits arg2 ) ( reverse #42 ) ( * arg1 10 ) ( + #44 arg2 ) ( lambda2 #45 ) ( reduce #43 0 #46 ) ( > #41 #47 ) ( if #48 arg1 arg2 ) ( lambda2 #49 ) ( reduce #35 0 #50 ) ( - #51 e ) ( * #33 #52 ) ( + #14 #53 )
LSTM(wrong):
( - a d ) ( - a d ) ( * #0 #1 ) ( digits c ) ( reverse #3 ) ( * arg1 10 ) ( + #5 arg2 ) ( lambda2 #6 ) ( reduce #4 0 #7 ) ( - #8 e ) ( + b 1 ) ( range 0 #10 ) ( digits arg1 ) ( reverse #12 ) ( * arg1 10 ) ( + #14 arg2 ) ( lambda2 #15 ) ( reduce #13 0 #16 ) ( digits arg2 ) ( reverse #18 ) ( * arg1 10 ) ( + #20 arg2 ) ( lambda2 #21 ) ( reduce #19 0 #22 ) ( > #17 #23 ) ( if #24 arg1 arg2 ) ( lambda2 #25 ) ( reduce #11 0 #26 ) ( - #27 e ) ( * #9 #28 ) ( + #2 #29 )
Appendix ::: Unbinding relation vector clustering
We run K-means clustering on both datasets with $k = 3,4,5,6$ clusters and the results are displayed in Figure FIGREF71 and Figure FIGREF72. As described before, unbinding-vectors for operators or functions with similar semantics tend to be closer to each other. For example, in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together at middle, and operators related to geometry such as square or volume are clustered together at bottom left. In AlgoLisp dataset, basic arithmetic functions are clustered at middle, and string processing functions are clustered at right. | Full Testing Set accuracy: 84.02
Cleaned Testing Set accuracy: 93.48 |
4c7ac51a66c15593082e248451e8f6896e476ffb | 4c7ac51a66c15593082e248451e8f6896e476ffb_0 | Q: What is the performance proposed model achieved on AlgoList benchmark?
Text: INTRODUCTION
When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.
In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9).
Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis.
Background: Review of Tensor-Product Representation
The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols.
The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR `binding' operation, the tensor (or generalized outer) product $\otimes $.
Formally, suppose a symbolic type is defined by the roles $\lbrace r_i \rbrace $, and suppose that in a particular instance of that type, ${S}$, role $r_i$ is bound by filler $f_i$. The TPR embedding of ${S}$ is the order-2 tensor $\mathbf {S} = \sum _i \mathbf {f}_i \otimes \mathbf {r}_i = \sum _i \mathbf {f}_i \mathbf {r}_i^\top $ where $\lbrace \mathbf {f}_i \rbrace $ are vector embeddings of the fillers and $\lbrace \mathbf {r}_i \rbrace $ are vector embeddings of the roles. In Eq. SECREF2, and below, for notational simplicity we conflate order-2 tensors and matrices.
As a simple example, consider the symbolic type string, and choose roles to be $r_1 = $ first_element, $r_2 = $ second_element, etc. Then in the specific string S = cba, the first role $r_1$ is filled by c, and $r_2$ and $r_3$ by b and a, respectively. The TPR for S is $\mathbf {c} \otimes \mathbf {r}_1 + \mathbf {b} \otimes \mathbf {r}_2 + \mathbf {a} \otimes \mathbf {r}_3$, where $\mathbf {a}, \mathbf {b}, \mathbf {c}$ are the vector embeddings of the symbols a, b, c, and $\mathbf {r}_i$ is the vector embedding of role $r_i$.
A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be $n_{\mathrm {R}}, n_{\mathrm {F}}$, respectively. Define the matrix of all possible role vectors to be $\mathbf {R} \in \mathbb {R}^{d_{\mathrm {R}}\times n_{\mathrm {R}}}$, with column $i$, $[\mathbf {R}]_{:i} = \mathbf {r}_i \in \mathbb {R}^{d_{\mathrm {R}}}$, comprising the embedding of $r_i$. Similarly let $\mathbf {F} \in \mathbb {R}^{d_{\mathrm {F}}\times n_{\mathrm {F}}}$ be the matrix of all possible filler vectors. The TPR $\mathbf {S} \in \mathbb {R}^{d_{\mathrm {F}}\times d_{\mathrm {R}}}$. Below, $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ will be hyper-parameters, while $\mathbf {R}, \mathbf {F}$ will be learned parameter matrices.
Using summation in Eq. SECREF2 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding $\mathbf {S}$ of the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure ${S}$ given its TPR $\mathbf {S}$. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix $\mathbf {R}$ has a left inverse $\mathbf {U}$: $\mathbf {U} \mathbf {R} = \mathbf {I}$. Now define the unbinding (or dual) vector for role $r_j$, $\mathbf {u}_j$, to be the $j^{{\mathrm {th}}}$ column of $\mathbf {U}^\top $: $\mathbf {u}_j = [\mathbf {U}^\top ]_{:j}$. Then, since $[\mathbf {U} \mathbf {R}]_{ji} = [\mathbf {I}]_{ji} = \mathbf {U}_{j:} \mathbf {R}_{:i} = [\mathbf {U}^\top _{:j}]^\top \mathbf {R}_{:i} = \mathbf {u}_j^\top \mathbf {r}_i = \mathbf {r}_i^\top \mathbf {u}_j$, we have $\mathbf {r}_i^\top \mathbf {u}_j = \delta _{ji}$. This means that, to recover the filler of $r_j$ in the structure with TPR $\mathbf {S}$, we can take its tensor inner product (or matrix-vector product) with $\mathbf {u}_j$: $\mathbf {S} \mathbf {u}_j = [\sum _i \mathbf {f}_i \otimes \mathbf {r}_i] \mathbf {u}_j = \sum _i \mathbf {f}_i (\mathbf {r}_i^\top \mathbf {u}_j) = \mathbf {f}_j$
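As a concrete illustration of the binding and unbinding operations reviewed above, here is a minimal NumPy sketch for the string example; the dimensions and random embeddings are assumptions chosen only for illustration, not values used in the paper.

# A minimal sketch of TPR binding and unbinding (illustrative dimensions).
import numpy as np

rng = np.random.default_rng(0)
d_F, d_R, n_R = 8, 6, 3          # filler dim, role dim, number of roles

# Role matrix R with (almost surely) linearly independent columns r_1..r_3.
R = rng.standard_normal((d_R, n_R))
# Filler vectors for the symbols of the string S = "cba".
fillers = {s: rng.standard_normal(d_F) for s in "abc"}

# Binding: S = sum_i f_i (x) r_i, an order-2 tensor (matrix) in R^{d_F x d_R}.
S = (np.outer(fillers["c"], R[:, 0])    # c fills role first_element
     + np.outer(fillers["b"], R[:, 1])  # b fills role second_element
     + np.outer(fillers["a"], R[:, 2])) # a fills role third_element

# Unbinding vectors: rows of the left inverse of R, so that r_i . u_j = delta_ij.
U = np.linalg.pinv(R)                    # U R = I
recovered_b = S @ U[1, :]                # S u_2 = f_b
print(np.allclose(recovered_b, fillers["b"]))  # True (up to float error)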
In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors $_i$ and TPR unbinding using the tensor inner product with unbinding vectors $_j$. Binding will be used to produce the order-2 tensor $_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another.
TP-N2F Model
We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. It depicts the following high-level description.
As shown in Figure FIGREF3, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$, a 3-tuple consisting of a binary relation (or operation) $R$ with its two arguments. The “TP-N2F encoder” uses two LSTMs to produce a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the “context” over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the “Reasoning MLP”, which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the “TP-N2F decoder” attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for a $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$ tuple (detailed explanation in Sec. SECREF7). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural network implementation of this description of N2F tasks.
TP-N2F Model ::: Role-level description of N2F tasks
In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations.
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for natural-language input
Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence $S$ with $n$ word tokens $\lbrace w^0,w^1,...,w^{n-1}\rbrace $, each word token $w^t$ is assigned a learned role vector $\mathbf {r}^t$, soft-selected from the learned dictionary $\mathbf {R}$, and a learned filler vector $\mathbf {f}^t$, soft-selected from the learned dictionary $\mathbf {F}$ (Sec. SECREF2). The mechanism closely follows that of BIBREF7, and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Then each word token $w^t$ is represented by the tensor product of the filler vector and the role vector: $\mathbf {T}^t=\mathbf {f}^t \otimes \mathbf {r}^t$. In addition to the set of all its token embeddings $\lbrace \mathbf {T}^0, \ldots , \mathbf {T}^{n-1} \rbrace $, the sentence $S$ as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: $\mathbf {T}_S = \sum _{t=0}^{n-1} \mathbf {T}^t$.
Using TPRs to encode natural language has several advantages. First, natural language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in BIBREF7). Second, TPRs avoid the Bag of Word (BoW) confusion BIBREF8: the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay but the encodings are different with TPR embedding, because the role filled by a symbol changes with its context.
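The word-order point can be checked directly; the toy embeddings below are assumptions, used only to contrast a bag-of-words sum with a sum of token TPRs.

# A small sketch: summed TPRs, unlike a bag of words, distinguish word order.
import numpy as np

rng = np.random.default_rng(1)
d_F, d_R = 8, 6
f = {w: rng.standard_normal(d_F) for w in ["Jay", "saw", "Kay"]}   # fillers
r = [rng.standard_normal(d_R) for _ in range(3)]                   # position roles

def encode(words):
    return sum(np.outer(f[w], r[i]) for i, w in enumerate(words))

bow1 = sum(f[w] for w in ["Jay", "saw", "Kay"])
bow2 = sum(f[w] for w in ["Kay", "saw", "Jay"])
print(np.allclose(bow1, bow2))                                  # True: BoW collapses order
print(np.allclose(encode(["Jay", "saw", "Kay"]),
                  encode(["Kay", "saw", "Jay"])))               # False: TPRs keep order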
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for relational representations
In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. Given a binary relation $rel$, a relational tuple can be written as $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ where $arg_1,arg_2$ indicate two arguments of relation $rel$. Let us adopt the two positional roles, $p_i^{rel} = $ arg$_i$-of-$rel$ for $i=1,2$. The filler of role $p_i^{rel}$ is $arg_i$. Now let us use role decomposition recursively, noting that the role $p_i^{rel}$ can itself be decomposed into a sub-role $p_i = $ arg$_i$-of-$\underline{\hspace{5.69054pt}}$ which has a sub-filler $rel$. Suppose that $arg_i, rel, p_i$ are embedded as vectors $\mathbf {a}_i, \mathbf {r}_{rel}, \mathbf {p}_i$. Then the TPR encoding of $p_i^{rel}$ is $\mathbf {r}_{rel} \otimes \mathbf {p}_i$, so the TPR encoding of filler $arg_i$ bound to role $p_i^{rel}$ is $\mathbf {a}_i \otimes (\mathbf {r}_{rel} \otimes \mathbf {p}_i)$. The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$, as: $\mathbf {H} = \mathbf {a}_1 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_1 + \mathbf {a}_2 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_2$. Given the unbinding vectors $\mathbf {p}^{\prime }_i$ for positional role vectors $\mathbf {p}_i$ and the unbinding vector $\mathbf {r}^{\prime }_{rel}$ for the vector $\mathbf {r}_{rel}$ that embeds relation $rel$, each argument can be unbound in two steps as shown in Eqs. SECREF7–SECREF7. $\mathbf {H} \cdot \mathbf {p}^{\prime }_i = [ \mathbf {a}_1 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_1 + \mathbf {a}_2 \otimes \mathbf {r}_{rel} \otimes \mathbf {p}_2 ] \cdot \mathbf {p}^{\prime }_i = \mathbf {a}_i \otimes \mathbf {r}_{rel}$
$[ \mathbf {a}_i \otimes \mathbf {r}_{rel} ] \cdot \mathbf {r}^{\prime }_{rel} = \mathbf {a}_i$ Here $\cdot $ denotes the tensor inner product, which for the order-3 $\mathbf {H}$ and order-1 $\mathbf {p}^{\prime }_i$ in Eq. SECREF7 can be defined as $[\mathbf {H} \cdot \mathbf {p}^{\prime }_i]_{jk} = \sum _l [\mathbf {H}]_{jkl} [\mathbf {p}^{\prime }_i]_l$; in Eq. SECREF7, $\cdot $ is equivalent to the matrix-vector product.
Our proposed scheme can be contrasted with the TPR scheme in which $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ is embedded as $_{rel} \otimes _1 \otimes _2$ (e.g., BIBREF11, BIBREF12). In that scheme, an $n$-ary-relation tuple is embedded as an order-($n+1$) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an $n$-ary-relation tuple is still embedded as an order-3 tensor: there are just $n$ terms in the sum in Eq. SECREF7, using $n$ position vectors $_1, \dots , _n$; unbinding simply requires knowing the unbinding vectors for these fixed position vectors.
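For concreteness, a minimal NumPy sketch of this order-3 scheme and its two-step unbinding is given below; the dimensions, random embeddings, and variable names are illustrative assumptions rather than the trained model's parameters.

# A sketch of H = a1 (x) r_rel (x) p1 + a2 (x) r_rel (x) p2 and its two-step unbinding.
import numpy as np

rng = np.random.default_rng(2)
d_A, d_O, d_P = 10, 7, 5
a1, a2 = rng.standard_normal(d_A), rng.standard_normal(d_A)   # argument embeddings
r_rel = rng.standard_normal(d_O)                              # relation embedding
P = rng.standard_normal((d_P, 2))                             # position vectors p1, p2

H = (np.einsum("a,o,p->aop", a1, r_rel, P[:, 0])
     + np.einsum("a,o,p->aop", a2, r_rel, P[:, 1]))

# Step 1: unbind position 1 with its dual vector p1' -> B1 = a1 (x) r_rel.
P_dual = np.linalg.pinv(P)                       # rows are p1', p2'
B1 = np.einsum("aop,p->ao", H, P_dual[0, :])
# Step 2: unbind the relation with r_rel' -> the argument a1.
r_dual = r_rel / np.dot(r_rel, r_rel)            # dual of a single relation vector
print(np.allclose(B1 @ r_dual, a1))              # True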
In the model, the order-3 tensor $$ of Eq. SECREF7 has a different status than the order-2 tensor $_S$ of Sec. SECREF5. $_S$ is a TPR by construction, whereas $$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. SECREF7, and performs the unbinding operations which that structure calls for. In Appendix Sec. SECREF65, it is shown that, if unbinding each of a set of roles from some unknown tensor $$ gives a target set of fillers, then $$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. SECREF7.
TP-N2F Model ::: Role-level description of N2F tasks ::: The TP-N2F Scheme for Learning the input-output mapping
To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. As shown in (SECREF8), we formalize the learning scheme as learning a mapping function $f_{\mathrm {mapping}}(\cdot )$, which, given a structural representation of the natural-language input, $\mathbf {T}_S$, outputs a tensor $\mathbf {H}_F$ from which the structural representation of the output can be generated. At the role level of description, there's nothing more to be said about this mapping; how it is modeled at the neural network level is discussed in Sec. SECREF10. $\mathbf {H}_F = f_{\mathrm {mapping}}(\mathbf {T}_S)$
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation
As shown in Figure FIGREF3, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding. The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. SECREF5. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. SECREF7: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. SECREF8) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs $^t$ produced by the TP-N2F Encoder. The detailed implementations are introduced below.
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F natural-language Encoder
The TP-N2F encoder follows the role scheme in Sec. SECREF5 to encode each word token $w^t$ by soft-selecting one of $n_{\mathrm {F}}$ fillers and one of $n_{\mathrm {R}}$ roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure FIGREF11.) At each time-step $t$, the Filler-LSTM and the Role-LSTM take a learned word-token embedding $^t$ as input. The hidden state of the Filler-LSTM, $_{\mathrm {F}}^t$, is used to compute softmax scores $u_k^{\mathrm {F}}$ over $n_{\mathrm {F}}$ filler slots, and a filler vector $^{t} = ^{\mathrm {F}}$ is computed from the softmax scores (recall from Sec. SECREF2 that $$ is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, $_{\mathrm {R}}^t$. $f_{\mathrm {F}}$ and $f_{\mathrm {R}}$ denote the functions that generate $^{t}$ and $^t$ from the hidden states of the two LSTMs. The token $w^t$ is encoded as $^t$, the tensor product of $^{t}$ and $^t$. $^t$ replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector $^t$: see (SECREF10)–(SECREF10). After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products $\sum _t ^t$ to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. SECREF22 of the Appendix. Ft = fFiller-LSTM(t,t-1, Ft-1) Rt = fRole-LSTM(t,t-1, Rt-1)
$\mathbf {T}^t = \mathbf {f}^{t} \otimes \mathbf {r}^t = f_{\mathrm {F}}(\mathbf {h}_{\mathrm {F}}^t) \otimes f_{\mathrm {R}}(\mathbf {h}_{\mathrm {R}}^t)$
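A schematic sketch of a single encoder time step is given below, assuming the two LSTM hidden states are already computed; the dictionary sizes, the weight names (W_fa, W_ra), and the temperature value are illustrative stand-ins for learned parameters.

# One TP-N2F encoder step: soft-select a filler and a role, then bind them.
import numpy as np

def softmax(x, T=0.1):
    z = x / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(3)
d_T, d_F, d_R, n_F, n_R = 24, 6, 4, 10, 5   # d_T = d_F * d_R
F = rng.standard_normal((d_F, n_F))      # filler dictionary (learned)
R = rng.standard_normal((d_R, n_R))      # role dictionary (learned)
W_fa = rng.standard_normal((n_F, d_T))   # attention weights over filler slots
W_ra = rng.standard_normal((n_R, d_T))   # attention weights over role slots

h_F = rng.standard_normal(d_T)           # Filler-LSTM hidden state (assumed given)
h_R = rng.standard_normal(d_T)           # Role-LSTM hidden state (assumed given)

f_t = F @ softmax(W_fa @ h_F)            # soft-selected filler vector
r_t = R @ softmax(W_ra @ h_R)            # soft-selected role vector
T_t = np.outer(f_t, r_t)                 # token TPR; flattened, it replaces the hidden vector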
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F Relational-Tuple Decoder
The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure FIGREF13). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs $\lbrace ^t \rbrace $. The hidden-state $$ of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make $$ suitably approximate a TPR. At each time step $t$, the hidden state $^t$ of the Tuple-LSTM with attention (The version in BIBREF13) (SECREF12) is fed as input to the unbinding module, which regards $^t$ as if it were the TPR of a relational tuple with $m$ arguments possessing the role structure described in Sec. SECREF7: $^t \approx \sum _{i=1}^{m} _{i}^t \otimes _{rel}^t \otimes _i$. (In Figure FIGREF13, the assumed hypothetical form of $^t$, as well as that of $_i^t$ below, is shown in a bubble with dashed border.) To decode a binary relational tuple, the unbinding module decodes it from $^t$ using the two steps of TPR unbinding given in (SECREF7)–(SECREF7). The positional unbinding vectors $^{\prime }_{i}$ are learned during training and shared across all time steps. After the first unbinding step (SECREF7), i.e., the inner product of $^t$ with $^{\prime }_i$, we get tensors $_{i}^t$ (SECREF12). These are treated as the TPRs of two arguments $_i^t$ bound to a relation $_{rel}^t$. A relational unbinding vector $_{rel}^{\prime t}$ is computed by a linear function from the sum of the $_{i}^t$ and used to compute the inner product with each $_i^t$ to yield $_i^t$, which are treated as the embedding of argument vectors (SECREF12). Based on the TPR theory, $_{rel}^{\prime t}$ is passed to a linear function to get $_{rel}^t$ as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (Detailed equations are in Appendix Sec. SECREF42) t = Atten(fTuple-LSTM(relt,arg1t,arg2t,t-1,ct-1),[0,...,n-1])
$\mathbf {B}_1^t = \mathbf {H}^t \cdot \mathbf {p}_1^{\prime } \qquad \mathbf {B}_2^t = \mathbf {H}^t \cdot \mathbf {p}_2^{\prime }$
$\mathbf {r}_{rel}^{\prime t} = f_{\mathrm {linear}}(\mathbf {B}_1^t + \mathbf {B}_2^t) \qquad \mathbf {a}_1^t = \mathbf {B}_1^t \cdot \mathbf {r}_{rel}^{\prime t} \qquad \mathbf {a}_2^t = \mathbf {B}_2^t \cdot \mathbf {r}_{rel}^{\prime t}$
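The following sketch traces the decoder's unbinding module for one time step under the equations above; the tensor shapes, the assumed flattening inside f_linear, and the random stand-ins for learned vectors are our assumptions, not the released implementation.

# One decoder time step of the unbinding module (illustrative shapes and parameters).
import numpy as np

rng = np.random.default_rng(4)
d_A, d_O, d_P = 10, 7, 5
H_t = rng.standard_normal((d_A, d_O, d_P))        # Tuple-LSTM hidden state viewed as order-3
p1_dual = rng.standard_normal(d_P)                # learned positional unbinding vectors (stand-ins)
p2_dual = rng.standard_normal(d_P)
W_rel = rng.standard_normal((d_O, d_A * d_O))     # assumed shape of f_linear

B1 = np.einsum("aop,p->ao", H_t, p1_dual)         # first unbinding step: B_i = H . p_i'
B2 = np.einsum("aop,p->ao", H_t, p2_dual)
r_rel_dual = W_rel @ (B1 + B2).reshape(-1)        # relation unbinding vector r_rel'
a1, a2 = B1 @ r_rel_dual, B2 @ r_rel_dual         # argument embeddings fed to the softmax layers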
TP-N2F Model ::: Inference and The Learning Strategy of the TP-N2F Model
During inference time, natural language questions are encoded via the encoder and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step.
TP-N2F is trained using back-propagation BIBREF14 with the Adam optimizer BIBREF15 and teacher-forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens from the argument vocabulary. For an input ${\mathcal {I}}$ that generates $N$ output relational tuples, the loss is the sum of the cross entropy loss ${\mathcal {L}}$ between the true labels $L$ and predicted tokens for relations and arguments as shown in (SECREF14). $\mathcal {L}_{\mathcal {I}} = \sum _{i=0}^{N-1} \mathcal {L}(rel^i, L_{rel}^i) + \sum _{i=0}^{N-1}\sum _{j=1}^{2} \mathcal {L}(arg_j^i, L_{arg_j}^i)$
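A small sketch of this loss, assuming the per-step logits over the separate relation and argument vocabularies are already computed; the array shapes are illustrative.

# Summed cross entropy over the relation and the two arguments of each tuple.
import numpy as np

def cross_entropy(logits, target):
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def sequence_loss(rel_logits, arg_logits, rel_targets, arg_targets):
    # rel_logits: [N, n_rel], arg_logits: [N, 2, n_arg]; targets are integer ids.
    loss = 0.0
    for i, rel_t in enumerate(rel_targets):
        loss += cross_entropy(rel_logits[i], rel_t)
        for j in range(2):
            loss += cross_entropy(arg_logits[i, j], arg_targets[i][j])
    return loss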
EXPERIMENTS
The proposed TP-N2F model is evaluated on two N2F tasks, generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results of each task and the analysis of the unbinding relation vectors are introduced in turn. Details of experiments and datasets are described in Sec. SECREF20 in the Appendix.
EXPERIMENTS ::: Generating operation sequences to solve math problems
Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.
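To make the task format concrete, here is a hedged sketch of an interpreter for such straight-line operation sequences; it is a simplification, not the official execution script of BIBREF16, and the parsing of tokens such as const-100 as the constant 100 follows the data sample in Appendix A.4 as an assumption.

# Execute a MathQA-style straight-line program such as
# (multiply,n0,n1) (divide,#0,const-100) (add,n0,#1).
def run_program(ops, numbers):
    """ops: list of (operator, arg1, arg2); numbers: values n0, n1, ... from the question."""
    funcs = {"add": lambda x, y: x + y, "subtract": lambda x, y: x - y,
             "multiply": lambda x, y: x * y, "divide": lambda x, y: x / y}
    results = []
    def value(tok):
        if tok.startswith("#"):
            return results[int(tok[1:])]                    # result of an earlier step
        if tok.startswith("const"):
            return float(tok[len("const"):].lstrip("_-"))   # e.g. const-100 -> 100.0 (assumed)
        if tok.startswith("n"):
            return numbers[int(tok[1:])]                    # number extracted from the question
        return float(tok)
    for op, a, b in ops:
        results.append(funcs[op](value(a), value(b)))
    return results[-1]

# e.g. a population of 3888 growing by 20 percent for one year:
print(run_program([("multiply", "n0", "n1"), ("divide", "#0", "const-100"),
                   ("add", "n0", "#1")], numbers=[3888, 20]))   # 4665.6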
EXPERIMENTS ::: Generating program trees from natural-language descriptions
Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than TP-N2F Encoder. This may be because lisp codes rely more heavily on structure representations.
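As an illustration of the parsing step described above, the sketch below flattens a nested Lisp program into the straight-line tuple style, with #i referring to the result of the i-th earlier command; the post-order traversal is an assumption consistent with the parsed sample shown in Appendix A.4.

# Flatten a nested AlgoLisp program into a straight-line command sequence.
def flatten(tree, commands=None):
    """tree: nested lists like ["map", "a", ["partial1", "b", "-"]]."""
    if commands is None:
        commands = []
    if not isinstance(tree, list):
        return tree, commands                    # atomic token (variable, constant, op name)
    head, *args = tree
    resolved = [flatten(a, commands)[0] for a in args]
    commands.append((head, *resolved))
    return f"#{len(commands) - 1}", commands

_, cmds = flatten(["map", "a", ["partial1", "b", "-"]])
print(cmds)   # [('partial1', 'b', '-'), ('map', 'a', '#0')]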
EXPERIMENTS ::: Interpretation of learned structure
To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. K-means clustering results on the average vectors are presented in Figure FIGREF71 and Figure FIGREF72 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operation-sequence-answer pairs. More clustering results are presented in the Appendix A.6.
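A sketch of this clustering analysis is given below, assuming the trained relation unbinding vectors have been exported as a dictionary keyed by operator name; the use of scikit-learn and a 2-D PCA projection mirrors the described procedure, but the exact analysis code here is an illustration.

# PCA projection and K-means clustering of the learned unbinding relation vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_unbinding_vectors(vectors_by_op, n_clusters=5):
    names = list(vectors_by_op)
    X = np.stack([vectors_by_op[n] for n in names])
    X2 = PCA(n_components=2).fit_transform(X)            # project for visualization
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X2)
    return dict(zip(names, labels)), X2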
Related work
N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing BIBREF19, BIBREF20, BIBREF21, BIBREF16, BIBREF17, BIBREF18. These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language BIBREF7, BIBREF9. TPR unbinding has also been used to generate natural language captions from images BIBREF8. Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space BIBREF22, BIBREF11, BIBREF12. However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks.
CONCLUSION AND FUTURE WORK
In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoder-decoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The results show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural- to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks.
Appendix ::: Implementations of TP-N2F for experiments
In this section, we present details of the experiments of TP-N2F on the two datasets. We present the implementation of TP-N2F on each dataset.
The MathQA dataset consists of about 37k math word problems ((80/12/8)% training/dev/testing problems), each with a corresponding list of multi-choice options and a straight-line operation sequence program to solve the problem. An example from the dataset is presented in Appendix A.4. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed to generate the solution for the given math problem. We use the execution script from BIBREF16 to execute the generated operation sequence and compute the multi-choice accuracy for each problem. During our experiments we observed that there are about 30% noisy examples (on which the execution script fails to get the correct answer on the ground truth program). Therefore, we report both execution accuracy (the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly).
The AlgoLisp dataset BIBREF17 is a program synthesis dataset, which has 79k/9k/10k training/dev/testing samples. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of commands from leaves to root and (as in MathQA) use the symbol $\#_i$ to indicate the result of the $i^{\mathrm {th}}$ command (generated previously by the model). A dataset sample with our parsed command sequence is presented in Appendix A.4. AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: accuracy of passing all test cases (Acc), accuracy of passing 50% of test cases (50p-Acc), and accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (where the execution script fails to pass all test cases on the ground truth program), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed).
We use $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ to indicate the TP-N2F encoder hyperparameters, the dimension of role vectors, the number of roles, the dimension of filler vectors and the number of fillers. $d_{Rel}, d_{Arg},d_{Pos}$ indicate the TP-N2F decoder hyper-parameters, the dimension of relation vectors, the dimension of argument vectors, and the dimension of position vectors.
In the experiment on the MathQA dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 20$, $d_{Rel} = 20$, $d_{Arg} = 10$, $d_{Pos} = 5$ and we train the model for 60 epochs with learning rate 0.00115. The reasoning module only contains one layer. As most of the math operators in this dataset are binary, we replace all operators taking three arguments with a set of binary operators based on hand-encoded rules, and for all operators taking one argument, a padding symbol is appended. For the baseline SEQ2PROG-orig, TP2LSTM and LSTM2TP, we use hidden size 100, single-direction, one-layer LSTM. For the SEQ2PROG-best, we performed a hyperparameter search on the hidden size for both encoder and decoder; the best score is reported.
In the experiment on the AlgoLisp dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 30$, $d_{Rel} = 30$, $d_{Arg} = 20$, $d_{Pos} = 5$ and we train the model for 50 epochs with learning rate 0.00115. We also use one-layer in the reasoning module like in MathQA. For this dataset, most function calls take three arguments so we simply add padding symbols for those functions with fewer than three arguments.
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F encoder
Filler-LSTM in TP-N2F encoder
This is a standard LSTM, governed by the equations:
$\varphi , \tanh $ are the logistic sigmoid and tanh functions applied elementwise. $\flat $ flattens (reshapes) a matrix in $^{d_{\mathrm {F}} \times d_{\mathrm {R}}}$ into a vector in $^{d_{\mathrm {T}}}$, where $d_{\mathrm {T}} = d_{\mathrm {F}} d_{\mathrm {R}}$. $\odot $ is elementwise multiplication. The variables have the following dimensions: ft, ft, ft, ft, ft, ft, ff, fg, fi, fo, ♭(t-1) RdT
wt Rd
ff, fg, fi, fo RdT d
ff, fg, fi, fo RdT dT
Filler vector
The filler vector for input token $w^t$ is $^t$, defined through an attention vector over possible fillers, $_{\mathrm {f}}^t$:
($W_{\mathrm {f}}$ is the same as $$ of Sec. SECREF2.) The variables' dimensions are: fa RnF dT
ft RnF
f RdF nF
t RdF $T$ is the temperature factor, which is fixed at 0.1.
Role-LSTM in TP-N2F encoder
Similar to the Filler-LSTM, the Role-LSTM is also a standard LSTM, governed by the equations:
The variable dimensions are: rt, rt, rt, rt, rt, rt, rf, rg, ri, ro, ♭(t-1) RdT
wt Rd
rf, rg, ri, ro RdT d
rf, rg, ri, ro RdT dT
Role vector
The role vector for input token $w^t$ is determined analogously to its filler vector:
The dimensions are: ra RnR dT
rt RnR
r RdR nR
t RdR
Binding
The TPR for the filler/role binding for token $w^t$ is then:
where t RdR dF
Appendix ::: Detailed equations of TP-N2F ::: Structure Mapping
$^0 \in \mathbb {R}^{d_{\mathrm {H}}}$, where $d_{\mathrm {H}} = d_{\mathrm {A}}, d_{\mathrm {O}}, d_{\mathrm {P}}$ are dimension of argument vector, operator vector and position vector. $f_{\mathrm {mapping}}$ is implemented with a MLP (linear layer followed by a tanh) for mapping the $_t \in \mathbb {R}^{d_{\mathrm {T}}}$ to the initial state of decoder $^0$.
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F decoder
Tuple-LSTM
The output tuples are also generated via a standard LSTM:
Here, $\gamma $ is the concatenation function. $_{Rel}^{t-1}$ is the trained embedding vector for the Relation of the input binary tuple, $_{Arg1}^{t-1}$ is the embedding vector for the first argument and $_{Arg2}^{t-1}$ is the embedding vector for the second argument. Then the input for the Tuple LSTM is the concatenation of the embedding vectors of relation and arguments, with dimension $d_{\mathrm {dec}}$. t, t, t, t, t, inputt, f, g, i, o, ♭(t-1) RdH
dt Rddec
f, g, i, o RdH ddec
f, g, i, o RdH dH
t RdH ${\mathrm {Atten}}$ is the attention mechanism used in BIBREF13, which computes the dot product between $_{\mathrm {input}}^t$ and each $_{t^{\prime }}$. Then a linear function is used on the concatenation of $_{\mathrm {input}}^t$ and the softmax scores on all dot products to generate $^t$. The following equations show the attention mechanism:
${\mathrm {score}}$ is the score function of the attention. In this paper, the score function is dot product. T RdH n
t Rn
t RdH
RdH (dT+n)
Unbinding
At each timestep $t$, the 2-step unbinding process described in Sec. SECREF7 operates first on an encoding of the triple as a whole, $$, using two unbinding vectors $_i^{\prime }$ that are learned but fixed for all tuples. This first unbinding gives an encoding of the two operator-argument bindings, $_i$. The second unbinding operates on the $_i$, using a generated unbinding vector for the operator, $_{rel}^{\prime }$, giving encodings of the arguments, $_i$. The generated unbinding vector for the operator, $^{\prime }$, and the generated encodings of the arguments, $_i$, each produce a probability distribution over symbolic operator outputs $Rel$ and symbolic argument outputs $Arg_i$; these probabilities are used in the cross-entropy loss function. For generating a single symbolic output, the most-probable symbols are selected.
The dimensions are: rel't RdO
1t, 2t RdA
'1, '2 RdP
1t, 2t RdA dO
dual RdH
rt RnO dO
at RnA dA
rt RnR
a1t, a2t RnA
Appendix ::: The tensor that is input to the decoder's Unbinding Module is a TPR
Here we show that, if learning is successful, the order-3 tensor $$ that each iteration of the decoder's Tuple LSTM feeds to the decoder's Unbinding Module (Figure FIGREF13) will be a TPR of the form assumed in Eq. SECREF7, repeated here: = j j rel j. The operations performed by the decoder are given in Eqs. SECREF7–SECREF7, and Eqs. SECREF12–SECREF12, rewritten here: i' = i
i rel' = i This is the standard TPR unbinding operation, used recursively: first with the unbinding vectors for positions, $_i^{\prime }$, then with the unbinding vector for the operator, $_{rel}^{\prime }$. It therefore suffices to analyze a single unbinding; the result can then be used recursively. This in effect reduces the problem to the order-2 case. What we will show is: given a set of unbinding vectors $\lbrace _i^{\prime } \rbrace $ which are dual to a set of role vectors $\lbrace _i \rbrace $, with $i$ ranging over some index set $I$, if $$ is an order-2 tensor such that 'i = i, i I then = i I i i + TPR + for some tensor $$ that annihilates all the unbinding vectors: 'i = 0, i I. If learning is successful, the processing in the decoder will generate the target relational tuple $(R, A_1, A_2)$ by obeying Eq. SECREF65 in the first unbinding, where we have $_i^{\prime } = _i^{\prime }, _i = _i, I = \lbrace 1, 2\rbrace $, and obeying Eq. SECREF65 in the second unbinding, where we have $_i^{\prime } = _{rel}^{\prime }, _i^{\prime } = _i$, with $I =$ the set containing only the null index.
Treat rank-2 tensors as matrices; then unbinding is simply matrix-vector multiplication. Assume the set of unbinding vectors is linearly independent (otherwise there would in general be no way to satisfy Eq. SECREF65 exactly, contrary to assumption). Then expand the set of unbinding vectors, if necessary, into a basis $\lbrace ^{\prime }_k\rbrace _{k \in K \supseteq I}$. Find the dual basis, with $_k$ dual to $^{\prime }_k$ (so that $_l^\top _j^{\prime } = \delta _{lj}$). Because $\lbrace ^{\prime }_k\rbrace _{k \in K}$ is a basis, so is $\lbrace _k\rbrace _{k \in K}$, so any matrix $$ can be expanded as $= \sum _{k \in K} _k _k^{\top }$. Since $^{\prime }_i = _i, \forall i \in I$ are the unbinding conditions (Eq. SECREF65), we must have $_i = _i, i \in I$. Let $_{{\mathrm {TPR}}} \equiv \sum _{i \in I} _i _i^{\top }$. This is the desired TPR, with fillers $_i$ bound to the role vectors $_i$ which are the duals of the unbinding vectors $_i^{\prime }$ ($i \in I$). Then we have $= _{{\mathrm {TPR}}} + $ (Eq. SECREF65) where $\equiv \sum _{j \in K, j \notin I} _j _j^{\top }$; so $_i^{\prime } = {\mathbf {0}}, i \in I$ (Eq. SECREF65). Thus, if training is successful, the model must have learned how to feed the decoder with order-3 TPRs with the structure posited in Eq. SECREF65.
The argument so far addresses the case where the unbinding vectors are linearly independent, making it possible to satisfy Eq. SECREF65 exactly. In relatively high-dimensional vector spaces, it will often happen that even when the number of unbinding vectors exceeds the dimension of their space by a factor of 2 or 3 (which applies to the TP-N2F models presented here), there is a set of role vectors $\lbrace _k \rbrace _{k \in K}$ approximately dual to $\lbrace ^{\prime }_k \rbrace _{k \in K}$, such that $_l^\top _j^{\prime } = \delta _{lj} \hspace{2.84526pt}\forall l, j \in K$ holds to a good approximation. (If the distribution of normalized unbinding vectors is approximately uniform on the unit sphere, then choosing the approximate dual vectors to equal the unbinding vectors themselves will do, since they will be nearly orthonormal BIBREF10. If the $\lbrace ^{\prime }_k \rbrace _{k \in K}$ are not normalized, we just rescale the role vectors, choosing $_k = _k^{\prime } / \Vert _k^{\prime } \Vert ^2$.) When the number of such role vectors exceeds the dimension of the embedding space, they will be overcomplete, so while it is still true that any matrix $$ can be expanded as above ($= \sum _{k \in K} _k _k^{\top }$), this expansion will no longer be unique. So while it remains true that $$ a TPR, it is no longer uniquely decomposable into filler/role pairs. The claim above does not claim uniqueness in this sense, and remains true.)
Appendix ::: Dataset samples ::: Data sample from MathQA dataset
Problem: The present polulation of a town is 3888. Population increase rate is 20%. Find the population of town after 1 year?
Options: a) 2500, b) 2100, c) 3500, d) 3600, e) 2700
Operations: multiply(n0,n1), divide(#0,const-100), add(n0,#1)
Appendix ::: Dataset samples ::: Data sample from AlgoLisp dataset
Problem: Consider an array of numbers and a number, decrements each element in the given array by the given number, what is the given array?
Program Nested List: (map a (partial1 b –))
Command-Sequence: (partial1 b –), (map a #0)
Appendix ::: Generated programs comparison
In this section, we display some generated samples from the two datasets, where the TP-N2F model generates correct programs but LSTM-Seq2Seq does not.
Question: A train running at the speed of 50 km per hour crosses a post in 4 seconds. What is the length of the train?
TP-N2F(correct):
(multiply,n0,const1000) (divide,#0,const3600) (multiply,n1,#1)
LSTM(wrong):
(multiply,n0,const0.2778) (multiply,n1,#0)
Question: 20 is subtracted from 60 percent of a number, the result is 88. Find the number?
TP-N2F(correct):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1)
LSTM(wrong):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1) (multiply,#2,n3) (subtract,#3,n0)
Question: The population of a village is 14300. It increases annually at the rate of 15 percent. What will be its population after 2 years?
TP-N2F(correct):
(divide,n1,const100) (add,#0,const1) (power,#1,n2) (multiply,n0,#2)
LSTM(wrong):
(multiply,const4,const100) (sqrt,#0)
Question: There are two groups of students in the sixth grade. There are 45 students in group a, and 55 students in group b. If, on a particular day, 20 percent of the students in group a forget their homework, and 40 percent of the students in group b forget their homework, then what percentage of the sixth graders forgot their homework?
TP-N2F(correct):
(add,n0,n1) (multiply,n0,n2) (multiply,n1,n3) (divide,#1,const100) (divide,#2,const100) (add,#3,#4) (divide,#5,#0) (multiply,#6,const100)
LSTM(wrong):
(multiply,n0,n1) (subtract,n0,n1) (divide,#0,#1)
Question: 1 divided by 0.05 is equal to
TP-N2F(correct):
(divide,n0,n1)
LSTM(wrong):
(divide,n0,n1) (multiply,n2,#0)
Question: Consider a number a, compute factorial of a
TP-N2F(correct):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( invoke1,#5,a )
LSTM(wrong):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( len,a ) ( invoke1,#5,#6 )
Question: Given an array of numbers and numbers b and c, add c to elements of the product of elements of the given array and b, what is the product of elements of the given array and b?
TP-N2F(correct):
( partial, b,* ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
LSTM(wrong):
( partial1,b,+ ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
Question: You are given an array of numbers a and numbers b , c and d , let how many times you can replace the median in a with sum of its digits before it becomes a single digit number and b be the coordinates of one end and c and d be the coordinates of another end of segment e , your task is to find the length of segment e rounded down
TP-N2F(correct):
( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 ) ( - #13 c ) ( digits arg1 ) ( len #15 ) ( == #16 1 ) ( digits arg1 ) ( reduce #18 0 + ) ( self #19 ) ( + 1 #20 ) ( if #17 0 #21 ) ( lambda1 #22 ) ( sort a ) ( len a ) ( / #25 2 ) ( deref #24 #26 ) ( invoke1 #23 #27 ) ( - #28 c ) ( * #14 #29 ) ( - b d ) ( - b d ) ( * #31 #32 ) ( + #30 #33 ) ( sqrt #34 ) ( floor #35 )
LSTM(wrong): ( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 c ) ( - #13 ) ( - b d ) ( - b d ) ( * #15 #16 ) ( * #14 #17 ) ( + #18 ) ( sqrt #19 ) ( floor #20 )
Question: Given numbers a , b , c and e , let d be c , reverse digits in d , let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f , find the length of segment f squared
TP-N2F(correct):
( digits c ) ( reverse #0 ) ( * arg1 10 ) ( + #2 arg2 ) ( lambda2 #3 ) ( reduce #1 0 #4 ) ( - a #5 ) ( digits c ) ( reverse #7 ) ( * arg1 10 ) ( + #9 arg2 ) ( lambda2 #10 ) ( reduce #8 0 #11 ) ( - a #12 ) ( * #6 #13 ) ( + b 1 ) ( range 0 #15 ) ( digits arg1 ) ( reverse #17 ) ( * arg1 10 ) ( + #19 arg2 ) ( lambda2 #20 ) ( reduce #18 0 #21 ) ( digits arg2 ) ( reverse #23 ) ( * arg1 10 ) ( + #25 arg2 ) ( lambda2 #26 ) ( reduce #24 0 #27 ) ( > #22 #28 ) ( if #29 arg1 arg2 ) ( lambda2 #30 ) ( reduce #16 0 #31 ) ( - #32 e ) ( + b 1 ) ( range 0 #34 ) ( digits arg1 ) ( reverse #36 ) ( * arg1 10 ) ( + #38 arg2 ) ( lambda2 #39 ) ( reduce #37 0 #40 ) ( digits arg2 ) ( reverse #42 ) ( * arg1 10 ) ( + #44 arg2 ) ( lambda2 #45 ) ( reduce #43 0 #46 ) ( > #41 #47 ) ( if #48 arg1 arg2 ) ( lambda2 #49 ) ( reduce #35 0 #50 ) ( - #51 e ) ( * #33 #52 ) ( + #14 #53 )
LSTM(wrong):
( - a d ) ( - a d ) ( * #0 #1 ) ( digits c ) ( reverse #3 ) ( * arg1 10 ) ( + #5 arg2 ) ( lambda2 #6 ) ( reduce #4 0 #7 ) ( - #8 e ) ( + b 1 ) ( range 0 #10 ) ( digits arg1 ) ( reverse #12 ) ( * arg1 10 ) ( + #14 arg2 ) ( lambda2 #15 ) ( reduce #13 0 #16 ) ( digits arg2 ) ( reverse #18 ) ( * arg1 10 ) ( + #20 arg2 ) ( lambda2 #21 ) ( reduce #19 0 #22 ) ( > #17 #23 ) ( if #24 arg1 arg2 ) ( lambda2 #25 ) ( reduce #11 0 #26 ) ( - #27 e ) ( * #9 #28 ) ( + #2 #29 )
Appendix ::: Unbinding relation vector clustering
We run K-means clustering on both datasets with $k = 3,4,5,6$ clusters, and the results are displayed in Figure FIGREF71 and Figure FIGREF72. As described before, unbinding vectors for operators or functions with similar semantics tend to be closer to each other. For example, in the MathQA dataset, arithmetic operators such as add, subtract, multiply, and divide are clustered together in the middle, and operators related to geometry, such as square or volume, are clustered together at the bottom left. In the AlgoLisp dataset, basic arithmetic functions are clustered in the middle, and string-processing functions are clustered at the right.
| Full Testing Set Accuracy: 84.02, Cleaned Testing Set Accuracy: 93.48 |
05671d068679be259493df638d27c106e7dd36d0 | 05671d068679be259493df638d27c106e7dd36d0_0 | Q: What is the performance proposed model achieved on MathQA?
Text: INTRODUCTION
When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.
In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9).
Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis.
Background: Review of Tensor-Product Representation
The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols.
The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR `binding' operation, the tensor (or generalized outer) product $\otimes $.
Formally, suppose a symbolic type is defined by the roles $\lbrace r_i \rbrace $, and suppose that in a particular instance of that type, ${S}$, role $r_i$ is bound by filler $f_i$. The TPR embedding of ${S}$ is the order-2 tensor $\mathbf {S} = \sum _i \mathbf {f}_i \otimes \mathbf {r}_i = \sum _i \mathbf {f}_i \mathbf {r}_i^\top $ where $\lbrace \mathbf {f}_i \rbrace $ are vector embeddings of the fillers and $\lbrace \mathbf {r}_i \rbrace $ are vector embeddings of the roles. In Eq. SECREF2, and below, for notational simplicity we conflate order-2 tensors and matrices.
As a simple example, consider the symbolic type string, and choose roles to be $r_1 = $ first_element, $r_2 = $ second_element, etc. Then in the specific string S = cba, the first role $r_1$ is filled by c, and $r_2$ and $r_3$ by b and a, respectively. The TPR for S is $\mathbf {c} \otimes \mathbf {r}_1 + \mathbf {b} \otimes \mathbf {r}_2 + \mathbf {a} \otimes \mathbf {r}_3$, where $\mathbf {a}, \mathbf {b}, \mathbf {c}$ are the vector embeddings of the symbols a, b, c, and $\mathbf {r}_i$ is the vector embedding of role $r_i$.
A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be $n_{\mathrm {R}}, n_{\mathrm {F}}$, respectively. Define the matrix of all possible role vectors to be $\in ^{d_{\mathrm {R}}\times n_{\mathrm {R}}}$, with column $i$, $[]_{:i} = _i \in ^{d_{\mathrm {R}}}$, comprising the embedding of $r_i$. Similarly let $\in ^{d_{\mathrm {F}}\times n_{\mathrm {F}}}$ be the matrix of all possible filler vectors. The TPR $\in ^{d_{\mathrm {F}}\times d_{\mathrm {R}}}$. Below, $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ will be hyper-parameters, while $, $ will be learned parameter matrices.
Using summation in Eq.SECREF2 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding $$ of the the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure ${S}$ given its TPR $$. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix $$ has a left inverse $$: $= $. Now define the unbinding (or dual) vector for role $r_j$, $_j$, to be the $j^{{\mathrm {th}}}$ column of $^\top $: $U_{:j}^\top $. Then, since $[]_{ji} = []_{ji} = _{j:} _{:i} = [^\top _{:j}]^\top _{:i} =_j^\top _i = _i^\top _j$, we have $_i^\top _j = \delta _{ji}$. This means that, to recover the filler of $r_j$ in the structure with TPR $$, we can take its tensor inner product (or matrix-vector product) with $_j$: j = [ i i i] j = i i ij = j
In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors $_i$ and TPR unbinding using the tensor inner product with unbinding vectors $_j$. Binding will be used to produce the order-2 tensor $_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another.
TP-N2F Model
We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. It depicts the following high-level description.
As shown in Figure FIGREF3, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$, a 3-tuple consisting of a binary relation (or operation) $R$ with its two arguments. The “TP-N2F encoder” uses two LSTMs to produce a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the “context” over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the “Reasoning MLP”, which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the “TP-N2F decoder” attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for a $(R \hspace{2.84526pt}A_1 \hspace{2.84526pt}A_2)$ tuple (detailed explanation in Sec. SECREF7). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural network implementation of this description of N2F tasks.
TP-N2F Model ::: Role-level description of N2F tasks
In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations.
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for natural-language input
Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence $S$ with $n$ word tokens $\lbrace w^0,w^1,...,w^{n-1}\rbrace $, each word token $w^t$ is assigned a learned role vector $^t$, soft-selected from the learned dictionary $$, and a learned filler vector $^t$, soft-selected from the learned dictionary $$ (Sec. SECREF2). The mechanism closely follows that of BIBREF7, and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Then each word token $w^t$ is represented by the tensor product of the role vector and the filler vector: $^t=^t \otimes ^t$. In addition to the set of all its token embeddings $\lbrace ^0, \ldots , ^{n-1} \rbrace $, the sentence $S$ as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: $_S = \sum _{t=0}^{n-1} ^t$.
Using TPRs to encode natural language has several advantages. First, natural language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in BIBREF7). Second, TPRs avoid the Bag of Word (BoW) confusion BIBREF8: the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay but the encodings are different with TPR embedding, because the role filled by a symbol changes with its context.
TP-N2F Model ::: Role-level description of N2F tasks ::: Role-level description for relational representations
In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. Given a binary relation $rel$, a relational tuple can be written as $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ where $arg_1,arg_2$ indicate two arguments of relation $rel$. Let us adopt the two positional roles, $p_i^{rel} = $ arg$_i$-of-$rel$ for $i=1,2$. The filler of role $p_i^{rel}$ is $arg_i$. Now let us use role decomposition recursively, noting that the role $p_i^{rel}$ can itself be decomposed into a sub-role $p_i = $ arg$_i$-of-$\underline{\hspace{5.69054pt}}$ which has a sub-filler $rel$. Suppose that $arg_i, rel, p_i$ are embedded as vectors $_i, , _i$. Then the TPR encoding of $p_i^{rel}$ is $_{rel} \otimes _i$, so the TPR encoding of filler $arg_i$ bound to role $p_i^{rel}$ is $_i \otimes (_{rel} \otimes _i)$. The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$, as: = 1 rel 1 + 2 rel 2. Given the unbinding vectors $^{\prime }_i$ for positional role vectors $_i$ and the unbinding vector $^{\prime }_{rel}$ for the vector $_{rel}$ that embeds relation $rel$, each argument can be unbound in two steps as shown in Eqs. SECREF7–SECREF7. i' = [ 1 rel 1 + 2 rel 2 ] i' = i rel
$[\mathbf{a}_i \otimes \mathbf{r}_{rel}] \cdot \mathbf{r}^{\prime }_{rel} = \mathbf{a}_i$. Here $\cdot $ denotes the tensor inner product, which for the order-3 $\mathbf{H}$ and order-1 $\mathbf{p}^{\prime }_i$ in Eq. SECREF7 can be defined as $[\mathbf{H} \cdot \mathbf{p}^{\prime }_i]_{jk} = \sum _l [\mathbf{H}]_{jkl} [\mathbf{p}^{\prime }_i]_l$; in Eq. SECREF7, $\cdot $ is equivalent to the matrix-vector product.
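A small NumPy sketch of this two-step unbinding, under the simplifying assumption of orthonormal position vectors (so each positional unbinding vector equals its role vector) and a relation unbinding vector taken as the scaled dual $\mathbf{r}_{rel}/\Vert \mathbf{r}_{rel}\Vert ^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
d_a, d_r = 4, 3
a1, a2, rel = rng.normal(size=d_a), rng.normal(size=d_a), rng.normal(size=d_r)
p = np.eye(2)                                   # orthonormal position vectors: duals equal themselves

H = (np.einsum('j,k,l->jkl', a1, rel, p[0])     # a1 (x) rel (x) p1
     + np.einsum('j,k,l->jkl', a2, rel, p[1]))  # + a2 (x) rel (x) p2

B1 = np.einsum('jkl,l->jk', H, p[0])            # step 1: unbind position 1 -> a1 (x) rel
rel_dual = rel / rel.dot(rel)                   # unbinding vector dual to rel
print(np.allclose(B1 @ rel_dual, a1))           # step 2 recovers a1 exactly -> True
```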
Our proposed scheme can be contrasted with the TPR scheme in which $(rel \hspace{2.84526pt}arg_1 \hspace{2.84526pt}arg_2)$ is embedded as $\mathbf{r}_{rel} \otimes \mathbf{a}_1 \otimes \mathbf{a}_2$ (e.g., BIBREF11, BIBREF12). In that scheme, an $n$-ary-relation tuple is embedded as an order-($n+1$) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an $n$-ary-relation tuple is still embedded as an order-3 tensor: there are just $n$ terms in the sum in Eq. SECREF7, using $n$ position vectors $\mathbf{p}_1, \dots , \mathbf{p}_n$; unbinding simply requires knowing the unbinding vectors for these fixed position vectors.
In the model, the order-3 tensor $\mathbf{H}$ of Eq. SECREF7 has a different status than the order-2 tensor $\mathbf{T}_S$ of Sec. SECREF5. $\mathbf{T}_S$ is a TPR by construction, whereas $\mathbf{H}$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. SECREF7, and performs the unbinding operations which that structure calls for. In Appendix Sec. SECREF65, it is shown that, if unbinding each of a set of roles from some unknown tensor $\mathbf{T}$ gives a target set of fillers, then $\mathbf{T}$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. SECREF7.
TP-N2F Model ::: Role-level description of N2F tasks ::: The TP-N2F Scheme for Learning the input-output mapping
To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. As shown in (SECREF8), we formalize the learning scheme as learning a mapping function $f_{\mathrm {mapping}}(\cdot )$, which, given a structural representation of the natural-language input, $\mathbf{T}_S$, outputs a tensor $\mathbf{H}_F$ from which the structural representation of the output can be generated. At the role level of description, there is nothing more to be said about this mapping; how it is modeled at the neural network level is discussed in Sec. SECREF10. $\mathbf{H}_F = f_{\mathrm {mapping}}(\mathbf{T}_S)$
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation
As shown in Figure FIGREF3, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding. The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. SECREF5. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. SECREF7: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. SECREF8) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs $\mathbf{T}^t$ produced by the TP-N2F Encoder. The detailed implementations are introduced below.
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F natural-language Encoder
The TP-N2F encoder follows the role scheme in Sec. SECREF5 to encode each word token $w^t$ by soft-selecting one of $n_{\mathrm {F}}$ fillers and one of $n_{\mathrm {R}}$ roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure FIGREF11.) At each time-step $t$, the Filler-LSTM and the Role-LSTM take a learned word-token embedding $\mathbf{w}^t$ as input. The hidden state of the Filler-LSTM, $\mathbf{h}_{\mathrm {F}}^t$, is used to compute softmax scores $u_k^{\mathrm {F}}$ over $n_{\mathrm {F}}$ filler slots, and a filler vector $\mathbf{f}^{t} = \mathbf{F}\mathbf{u}^{\mathrm {F}}$ is computed from the softmax scores (recall from Sec. SECREF2 that $\mathbf{F}$ is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, $\mathbf{h}_{\mathrm {R}}^t$. $f_{\mathrm {F}}$ and $f_{\mathrm {R}}$ denote the functions that generate $\mathbf{f}^{t}$ and $\mathbf{r}^t$ from the hidden states of the two LSTMs. The token $w^t$ is encoded as $\mathbf{T}^t$, the tensor product of $\mathbf{f}^{t}$ and $\mathbf{r}^t$. $\mathbf{T}^t$ replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector $\mathbf{c}^t$: see (SECREF10)–(SECREF10). After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products $\sum _t \mathbf{T}^t$ to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. SECREF22 of the Appendix. $\mathbf{h}_{\mathrm {F}}^t = f_{\mathrm {Filler\text{-}LSTM}}(\mathbf{w}^t, \mathbf{T}^{t-1}, \mathbf{c}_{\mathrm {F}}^{t-1}) \qquad \mathbf{h}_{\mathrm {R}}^t = f_{\mathrm {Role\text{-}LSTM}}(\mathbf{w}^t, \mathbf{T}^{t-1}, \mathbf{c}_{\mathrm {R}}^{t-1})$
$\mathbf{T}^t = \mathbf{f}^{t} \otimes \mathbf{r}^t = f_{\mathrm {F}}(\mathbf{h}_{\mathrm {F}}^t) \otimes f_{\mathrm {R}}(\mathbf{h}_{\mathrm {R}}^t)$
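As a rough illustration (not the authors' implementation), one encoder time step could be sketched in PyTorch as follows; the class and parameter names are assumptions, and the previous TPR can be initialized to zeros at $t=0$:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TPN2FEncoderCell(nn.Module):
    """One time step of the dual-LSTM TPR encoder; a sketch, not the authors' code."""
    def __init__(self, d_word, n_f, d_f, n_r, d_r, temperature=0.1):
        super().__init__()
        d_t = d_f * d_r                                    # size of the flattened TPR
        self.filler_lstm = nn.LSTMCell(d_word, d_t)
        self.role_lstm = nn.LSTMCell(d_word, d_t)
        self.filler_attn = nn.Linear(d_t, n_f)             # scores over filler slots
        self.role_attn = nn.Linear(d_t, n_r)               # scores over role slots
        self.filler_dict = nn.Parameter(torch.randn(n_f, d_f))
        self.role_dict = nn.Parameter(torch.randn(n_r, d_r))
        self.temp = temperature

    def forward(self, w_t, T_prev, c_f, c_r):              # T_prev starts as zeros at t = 0
        flat_prev = T_prev.flatten(1)                      # the TPR replaces the LSTM hidden vector
        h_f, c_f = self.filler_lstm(w_t, (flat_prev, c_f))
        h_r, c_r = self.role_lstm(w_t, (flat_prev, c_r))
        f = F.softmax(self.filler_attn(h_f) / self.temp, dim=-1) @ self.filler_dict
        r = F.softmax(self.role_attn(h_r) / self.temp, dim=-1) @ self.role_dict
        T_t = torch.einsum('bi,bj->bij', f, r)             # binding: filler (x) role
        return T_t, c_f, c_r
```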
TP-N2F Model ::: The TP-N2F Model for Natural- to Formal-Language Generation ::: The TP-N2F Relational-Tuple Decoder
The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure FIGREF13). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs $\lbrace \mathbf{T}^t \rbrace $. The hidden state $\mathbf{H}$ of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make $\mathbf{H}$ suitably approximate a TPR. At each time step $t$, the hidden state $\mathbf{H}^t$ of the Tuple-LSTM with attention (the version in BIBREF13) (SECREF12) is fed as input to the unbinding module, which regards $\mathbf{H}^t$ as if it were the TPR of a relational tuple with $m$ arguments possessing the role structure described in Sec. SECREF7: $\mathbf{H}^t \approx \sum _{i=1}^{m} \mathbf{a}_{i}^t \otimes \mathbf{r}_{rel}^t \otimes \mathbf{p}_i$. (In Figure FIGREF13, the assumed hypothetical form of $\mathbf{H}^t$, as well as that of $\mathbf{B}_i^t$ below, is shown in a bubble with dashed border.) To decode a binary relational tuple, the unbinding module decodes it from $\mathbf{H}^t$ using the two steps of TPR unbinding given in (SECREF7)–(SECREF7). The positional unbinding vectors $\mathbf{p}^{\prime }_{i}$ are learned during training and shared across all time steps. After the first unbinding step (SECREF7), i.e., the inner product of $\mathbf{H}^t$ with $\mathbf{p}^{\prime }_i$, we get tensors $\mathbf{B}_{i}^t$ (SECREF12). These are treated as the TPRs of two arguments $\mathbf{a}_i^t$ bound to a relation $\mathbf{r}_{rel}^t$. A relational unbinding vector $\mathbf{r}_{rel}^{\prime t}$ is computed by a linear function from the sum of the $\mathbf{B}_{i}^t$ and used to compute the inner product with each $\mathbf{B}_i^t$ to yield $\mathbf{a}_i^t$, which are treated as the embeddings of argument vectors (SECREF12). Based on the TPR theory, $\mathbf{r}_{rel}^{\prime t}$ is passed to a linear function to get $\mathbf{r}_{rel}^t$ as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (Detailed equations are in Appendix Sec. SECREF42.) $\mathbf{H}^t = \mathrm{Atten}(f_{\mathrm {Tuple\text{-}LSTM}}(\mathbf{v}_{rel}^t,\mathbf{v}_{arg_1}^t,\mathbf{v}_{arg_2}^t,\mathbf{H}^{t-1},\mathbf{c}^{t-1}),[\mathbf{T}^0,...,\mathbf{T}^{n-1}])$
$\mathbf{B}_1^t = \mathbf{H}^t \cdot \mathbf{p}_1^{\prime } \qquad \mathbf{B}_2^t = \mathbf{H}^t \cdot \mathbf{p}_2^{\prime }$
$\mathbf{r}_{rel}^{\prime t} = f_{\mathrm {linear}}(\mathbf{B}_1^t + \mathbf{B}_2^t) \qquad \mathbf{a}_1^t = \mathbf{B}_1^t \cdot \mathbf{r}_{rel}^{\prime t} \qquad \mathbf{a}_2^t = \mathbf{B}_2^t \cdot \mathbf{r}_{rel}^{\prime t}$
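A hedged PyTorch sketch of the unbinding module implied by these equations; the tensor layout and all names are assumptions:

```python
import torch
import torch.nn as nn

class UnbindingModule(nn.Module):
    """Sketch of the two-step decoder unbinding; names and layouts are assumptions."""
    def __init__(self, d_a, d_o, d_p):
        super().__init__()
        self.p1_u = nn.Parameter(torch.randn(d_p))       # learned positional unbinding vectors,
        self.p2_u = nn.Parameter(torch.randn(d_p))       # shared across all time steps
        self.to_rel_unbind = nn.Linear(d_a * d_o, d_o)   # produces the relational unbinding vector
        self.to_rel = nn.Linear(d_o, d_o)                # maps it to the relation embedding

    def forward(self, H):                                # H: (batch, d_a, d_o, d_p)
        B1 = torch.einsum('bjkl,l->bjk', H, self.p1_u)   # first unbinding: (batch, d_a, d_o)
        B2 = torch.einsum('bjkl,l->bjk', H, self.p2_u)
        rel_u = self.to_rel_unbind((B1 + B2).flatten(1)) # (batch, d_o)
        a1 = torch.einsum('bjk,bk->bj', B1, rel_u)       # second unbinding: (batch, d_a)
        a2 = torch.einsum('bjk,bk->bj', B2, rel_u)
        return self.to_rel(rel_u), a1, a2                # relation and argument embeddings
```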
TP-N2F Model ::: Inference and The Learning Strategy of the TP-N2F Model
During inference time, natural language questions are encoded via the encoder and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step.
TP-N2F is trained using back-propagation BIBREF14 with the Adam optimizer BIBREF15 and teacher-forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens from the argument vocabulary. For an input ${\mathcal {I}}$ that generates $N$ output relational tuples, the loss is the sum of the cross entropy loss ${\mathcal {L}}$ between the true labels $L$ and predicted tokens for relations and arguments as shown in (SECREF14). $\mathcal {L}_{\mathcal {I}} = \sum _{i=0}^{N-1} \mathcal {L}(rel_i, L_{rel_i}) + \sum _{i=0}^{N-1}\sum _{j=1}^{2} \mathcal {L}(arg_j^i, L_{arg_j^i})$
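A minimal sketch of this loss in PyTorch, assuming the decoder has already produced logits over the relation and argument vocabularies for the $N$ tuples:

```python
import torch.nn.functional as F

def tuple_loss(rel_logits, arg1_logits, arg2_logits, rel_gold, arg1_gold, arg2_gold):
    # Each *_logits tensor has shape (N, vocab); each *_gold tensor has shape (N,).
    return (F.cross_entropy(rel_logits, rel_gold, reduction='sum')
            + F.cross_entropy(arg1_logits, arg1_gold, reduction='sum')
            + F.cross_entropy(arg2_logits, arg2_gold, reduction='sum'))
```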
EXPERIMENTS
The proposed TP-N2F model is evaluated on two N2F tasks, generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results of each task and the analysis of the unbinding relation vectors are introduced in turn. Details of experiments and datasets are described in Sec. SECREF20 in the Appendix.
EXPERIMENTS ::: Generating operation sequences to solve math problems
Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.
EXPERIMENTS ::: Generating program trees from natural-language descriptions
Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than the TP-N2F Encoder. This may be because Lisp code relies more heavily on structural representations.
EXPERIMENTS ::: Interpretation of learned structure
To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. K-means clustering results on the average vectors are presented in Figure FIGREF71 and Figure FIGREF72 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operation-sequence-answer pairs. More clustering results are presented in the Appendix A.6.
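A minimal scikit-learn sketch of this analysis, using random vectors as stand-ins for the trained unbinding relation vectors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

unbind_vecs = np.random.default_rng(0).normal(size=(40, 30))  # stand-in for trained unbinding vectors
low_dim = PCA(n_components=2).fit_transform(unbind_vecs)      # reduce dimension for visualization
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(low_dim)
```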
Related work
N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing BIBREF19, BIBREF20, BIBREF21, BIBREF16, BIBREF17, BIBREF18. These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language BIBREF7, BIBREF9. TPR unbinding has also been used to generate natural language captions from images BIBREF8. Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space BIBREF22, BIBREF11, BIBREF12. However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks.
CONCLUSION AND FUTURE WORK
In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoder-decoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The results show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural- to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks.
Appendix ::: Implementations of TP-N2F for experiments
In this section, we present details of the experiments of TP-N2F on the two datasets. We present the implementation of TP-N2F on each dataset.
The MathQA dataset consists of about 37k math word problems ((80/12/8)% training/dev/testing problems), each with a corresponding list of multi-choice options and a straight-line operation sequence program to solve the problem. An example from the dataset is presented in the Appendix A.4. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed to generate the solution for the given math problem. We use the execution script from BIBREF16 to execute the generated operation sequence and compute the multi-choice accuracy for each problem. During our experiments we observed that there are about 30% noisy examples (on which the execution script fails to get the correct answer from the ground-truth program). Therefore, we report both execution accuracy (the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly).
The AlgoLisp dataset BIBREF17 is a program synthesis dataset, which has 79k/9k/10k training/dev/testing samples. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of commands from leaves to root and (as in MathQA) use the symbol $\#_i$ to indicate the result of the $i^{\mathrm {th}}$ command (generated previously by the model). A dataset sample with our parsed command sequence is presented in the Appendix A.4. AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: accuracy of passing all test cases (Acc), accuracy of passing 50% of test cases (50p-Acc), and accuracy of generating an exactly matched program (M-Acc). AlgoLisp has about 10% noisy data (where the execution script fails to pass all test cases on the ground truth program), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed).
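A small Python sketch of this parsing step, flattening a nested Lisp-style tree into straight-line tuples with $\#_i$ back-references; it reproduces the dataset sample in Appendix A.4 but is not the dataset's official parser:

```python
def flatten(tree, out):
    # Post-order flatten of a nested Lisp-style list; '#i' refers to the i-th earlier command.
    args = []
    for child in tree[1:]:
        if isinstance(child, list):
            flatten(child, out)
            args.append(f"#{len(out) - 1}")
        else:
            args.append(child)
    out.append((tree[0], *args))
    return out

print(flatten(["map", "a", ["partial1", "b", "-"]], []))
# [('partial1', 'b', '-'), ('map', 'a', '#0')]
```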
We use $d_{\mathrm {R}}, n_{\mathrm {R}}, d_{\mathrm {F}}, n_{\mathrm {F}}$ to indicate the TP-N2F encoder hyperparameters, the dimension of role vectors, the number of roles, the dimension of filler vectors and the number of fillers. $d_{Rel}, d_{Arg},d_{Pos}$ indicate the TP-N2F decoder hyper-parameters, the dimension of relation vectors, the dimension of argument vectors, and the dimension of position vectors.
In the experiment on the MathQA dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 20$, $d_{Rel} = 20$, $d_{Arg} = 10$, $d_{Pos} = 5$ and we train the model for 60 epochs with learning rate 0.00115. The reasoning module only contains one layer. As most of the math operators in this dataset are binary, we replace all operators taking three arguments with a set of binary operators based on hand-encoded rules, and for all operators taking one argument, a padding symbol is appended. For the baseline SEQ2PROG-orig, TP2LSTM and LSTM2TP, we use hidden size 100, single-direction, one-layer LSTM. For the SEQ2PROG-best, we performed a hyperparameter search on the hidden size for both encoder and decoder; the best score is reported.
In the experiment on the AlgoLisp dataset, we use $n_{\mathrm {F}}= 150$, $n_{\mathrm {R}}= 50$, $d_{\mathrm {F}}= 30$, $d_{\mathrm {R}}= 30$, $d_{Rel} = 30$, $d_{Arg} = 20$, $d_{Pos} = 5$ and we train the model for 50 epochs with learning rate 0.00115. We also use a one-layer reasoning module, as for MathQA. For this dataset, most function calls take three arguments, so we simply add padding symbols for those functions with fewer than three arguments.
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F encoder
Filler-LSTM in TP-N2F encoder
This is a standard LSTM, governed by the equations:
$\varphi , \tanh $ are the logistic sigmoid and tanh functions applied elementwise. $\flat $ flattens (reshapes) a matrix in $\mathbb {R}^{d_{\mathrm {F}} \times d_{\mathrm {R}}}$ into a vector in $\mathbb {R}^{d_{\mathrm {T}}}$, where $d_{\mathrm {T}} = d_{\mathrm {F}} d_{\mathrm {R}}$. $\odot $ is elementwise multiplication. The variables have the following dimensions: the Filler-LSTM gate, cell-state, and hidden vectors, its bias vectors, and the flattened TPR $\flat (\mathbf{T}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {T}}}$; the word embedding $\mathbf{w}^t \in \mathbb {R}^{d}$; the four input weight matrices (for the forget, candidate, input, and output gates) lie in $\mathbb {R}^{d_{\mathrm {T}} \times d}$; and the four recurrent weight matrices lie in $\mathbb {R}^{d_{\mathrm {T}} \times d_{\mathrm {T}}}$.
Filler vector
The filler vector for input token $w^t$ is $\mathbf{f}^t$, defined through an attention vector over possible fillers, $\mathbf{u}_{\mathrm {f}}^t$:
($\mathbf{W}_{\mathrm {f}}$ is the same as $\mathbf{F}$ of Sec. SECREF2.) The variables' dimensions are: the filler attention matrix lies in $\mathbb {R}^{n_{\mathrm {F}} \times d_{\mathrm {T}}}$, $\mathbf{u}_{\mathrm {f}}^t \in \mathbb {R}^{n_{\mathrm {F}}}$, $\mathbf{W}_{\mathrm {f}} \in \mathbb {R}^{d_{\mathrm {F}} \times n_{\mathrm {F}}}$, and $\mathbf{f}^t \in \mathbb {R}^{d_{\mathrm {F}}}$. $T$ is the temperature factor, which is fixed at 0.1.
Role-LSTM in TP-N2F encoder
Similar to the Filler-LSTM, the Role-LSTM is also a standard LSTM, governed by the equations:
The variable dimensions are: the Role-LSTM gate, cell-state, and hidden vectors, its bias vectors, and the flattened TPR $\flat (\mathbf{T}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {T}}}$; the word embedding $\mathbf{w}^t \in \mathbb {R}^{d}$; the four input weight matrices lie in $\mathbb {R}^{d_{\mathrm {T}} \times d}$; and the four recurrent weight matrices lie in $\mathbb {R}^{d_{\mathrm {T}} \times d_{\mathrm {T}}}$.
Role vector
The role vector for input token $w^t$ is determined analogously to its filler vector:
The dimensions are: the role attention matrix lies in $\mathbb {R}^{n_{\mathrm {R}} \times d_{\mathrm {T}}}$, $\mathbf{u}_{\mathrm {r}}^t \in \mathbb {R}^{n_{\mathrm {R}}}$, $\mathbf{W}_{\mathrm {r}} \in \mathbb {R}^{d_{\mathrm {R}} \times n_{\mathrm {R}}}$, and $\mathbf{r}^t \in \mathbb {R}^{d_{\mathrm {R}}}$.
Binding
The TPR for the filler/role binding for token $w^t$ is then:
where $\mathbf{T}^t \in \mathbb {R}^{d_{\mathrm {F}} \times d_{\mathrm {R}}}$
Appendix ::: Detailed equations of TP-N2F ::: Structure Mapping
$\mathbf{H}^0 \in \mathbb {R}^{d_{\mathrm {H}}}$, where $d_{\mathrm {H}} = d_{\mathrm {A}} d_{\mathrm {O}} d_{\mathrm {P}}$ and $d_{\mathrm {A}}, d_{\mathrm {O}}, d_{\mathrm {P}}$ are the dimensions of the argument vector, operator vector, and position vector. $f_{\mathrm {mapping}}$ is implemented with an MLP (a linear layer followed by a tanh) that maps the flattened encoder TPR $\mathbf{T}_S \in \mathbb {R}^{d_{\mathrm {T}}}$ to the initial state of the decoder, $\mathbf{H}^0$.
Appendix ::: Detailed equations of TP-N2F ::: TP-N2F decoder
Tuple-LSTM
The output tuples are also generated via a standard LSTM:
Here, $\gamma $ is the concatenation function. $\mathbf{v}_{Rel}^{t-1}$ is the trained embedding vector for the Relation of the input binary tuple, $\mathbf{v}_{Arg1}^{t-1}$ is the embedding vector for the first argument and $\mathbf{v}_{Arg2}^{t-1}$ is the embedding vector for the second argument. Then the input for the Tuple LSTM is the concatenation of the embedding vectors of relation and arguments, with dimension $d_{\mathrm {dec}}$. The variable dimensions are: the Tuple-LSTM gate, cell-state, and pre-attention hidden vectors, its bias vectors, and the flattened $\flat (\mathbf{H}^{t-1})$ all lie in $\mathbb {R}^{d_{\mathrm {H}}}$; the concatenated input $\mathbf{d}^t \in \mathbb {R}^{d_{\mathrm {dec}}}$; the four input weight matrices lie in $\mathbb {R}^{d_{\mathrm {H}} \times d_{\mathrm {dec}}}$; the four recurrent weight matrices lie in $\mathbb {R}^{d_{\mathrm {H}} \times d_{\mathrm {H}}}$; and the attention output (the flattened $\mathbf{H}^t$) lies in $\mathbb {R}^{d_{\mathrm {H}}}$. ${\mathrm {Atten}}$ is the attention mechanism used in BIBREF13, which computes the dot product between $\mathbf{h}_{\mathrm {input}}^t$ and each encoder TPR $\mathbf{T}^{t^{\prime }}$. Then a linear function is used on the concatenation of $\mathbf{h}_{\mathrm {input}}^t$ and the softmax scores on all dot products to generate $\mathbf{H}^t$. The following equations show the attention mechanism:
${\mathrm {score}}$ is the score function of the attention. In this paper, the score function is the dot product. The attention variables have the following dimensions: the context matrix of encoder outputs lies in $\mathbb {R}^{d_{\mathrm {H}} \times n}$, the softmax score vector lies in $\mathbb {R}^{n}$, the attention output lies in $\mathbb {R}^{d_{\mathrm {H}}}$, and the linear map lies in $\mathbb {R}^{d_{\mathrm {H}} \times (d_{\mathrm {T}}+n)}$.
Unbinding
At each timestep $t$, the 2-step unbinding process described in Sec. SECREF7 operates first on an encoding of the triple as a whole, $\mathbf{H}^t$, using two unbinding vectors $\mathbf{p}_i^{\prime }$ that are learned but fixed for all tuples. This first unbinding gives an encoding of the two operator-argument bindings, $\mathbf{B}_i$. The second unbinding operates on the $\mathbf{B}_i$, using a generated unbinding vector for the operator, $\mathbf{r}_{rel}^{\prime }$, giving encodings of the arguments, $\mathbf{a}_i$. The generated unbinding vector for the operator, $\mathbf{r}_{rel}^{\prime }$, and the generated encodings of the arguments, $\mathbf{a}_i$, each produce a probability distribution over symbolic operator outputs $Rel$ and symbolic argument outputs $Arg_i$; these probabilities are used in the cross-entropy loss function. For generating a single symbolic output, the most-probable symbols are selected.
The dimensions are: $\mathbf{r}_{rel}^{\prime t} \in \mathbb {R}^{d_{\mathrm {O}}}$; $\mathbf{a}_1^t, \mathbf{a}_2^t \in \mathbb {R}^{d_{\mathrm {A}}}$; $\mathbf{p}_1^{\prime }, \mathbf{p}_2^{\prime } \in \mathbb {R}^{d_{\mathrm {P}}}$; $\mathbf{B}_1^t, \mathbf{B}_2^t \in \mathbb {R}^{d_{\mathrm {A}} \times d_{\mathrm {O}}}$; the dual vector lies in $\mathbb {R}^{d_{\mathrm {H}}}$; the relation output matrix lies in $\mathbb {R}^{n_{\mathrm {O}} \times d_{\mathrm {O}}}$; the argument output matrix lies in $\mathbb {R}^{n_{\mathrm {A}} \times d_{\mathrm {A}}}$; the relation score vector lies in $\mathbb {R}^{n_{\mathrm {O}}}$; and the two argument score vectors lie in $\mathbb {R}^{n_{\mathrm {A}}}$.
Appendix ::: The tensor that is input to the decoder's Unbinding Module is a TPR
Here we show that, if learning is successful, the order-3 tensor $\mathbf{H}$ that each iteration of the decoder's Tuple LSTM feeds to the decoder's Unbinding Module (Figure FIGREF13) will be a TPR of the form assumed in Eq. SECREF7, repeated here: $\mathbf{H} = \sum _j \mathbf{a}_j \otimes \mathbf{r}_{rel} \otimes \mathbf{p}_j$. The operations performed by the decoder are given in Eqs. SECREF7–SECREF7, and Eqs. SECREF12–SECREF12, rewritten here: $\mathbf{H} \cdot \mathbf{p}_i^{\prime } = \mathbf{B}_i$
$\mathbf{B}_i \cdot \mathbf{r}_{rel}^{\prime } = \mathbf{a}_i$. This is the standard TPR unbinding operation, used recursively: first with the unbinding vectors for positions, $\mathbf{p}_i^{\prime }$, then with the unbinding vector for the operator, $\mathbf{r}_{rel}^{\prime }$. It therefore suffices to analyze a single unbinding; the result can then be used recursively. This in effect reduces the problem to the order-2 case. What we will show is: given a set of unbinding vectors $\lbrace \mathbf{u}_i^{\prime } \rbrace $ which are dual to a set of role vectors $\lbrace \mathbf{r}_i \rbrace $, with $i$ ranging over some index set $I$, if $\mathbf{M}$ is an order-2 tensor such that $\mathbf{M}\mathbf{u}^{\prime }_i = \mathbf{f}_i, \forall i \in I$, then $\mathbf{M} = \sum _{i \in I} \mathbf{f}_i \mathbf{r}_i^{\top } + \mathbf{Z} \equiv \mathbf{M}_{\mathrm {TPR}} + \mathbf{Z}$ for some tensor $\mathbf{Z}$ that annihilates all the unbinding vectors: $\mathbf{Z}\mathbf{u}^{\prime }_i = \mathbf{0}, \forall i \in I$. If learning is successful, the processing in the decoder will generate the target relational tuple $(R, A_1, A_2)$ by obeying Eq. SECREF65 in the first unbinding, where we have $\mathbf{u}_i^{\prime } = \mathbf{p}_i^{\prime }, \mathbf{f}_i = \mathbf{B}_i, I = \lbrace 1, 2\rbrace $, and obeying Eq. SECREF65 in the second unbinding, where we have $\mathbf{u}^{\prime } = \mathbf{r}_{rel}^{\prime }, \mathbf{f} = \mathbf{a}_i$, with $I =$ the set containing only the null index.
Treat rank-2 tensors as matrices; then unbinding is simply matrix-vector multiplication. Assume the set of unbinding vectors is linearly independent (otherwise there would in general be no way to satisfy Eq. SECREF65 exactly, contrary to assumption). Then expand the set of unbinding vectors, if necessary, into a basis $\lbrace \mathbf{u}^{\prime }_k\rbrace _{k \in K \supseteq I}$. Find the dual basis, with $\mathbf{r}_k$ dual to $\mathbf{u}^{\prime }_k$ (so that $\mathbf{r}_l^\top \mathbf{u}_j^{\prime } = \delta _{lj}$). Because $\lbrace \mathbf{u}^{\prime }_k\rbrace _{k \in K}$ is a basis, so is $\lbrace \mathbf{r}_k\rbrace _{k \in K}$, so any matrix $\mathbf{M}$ can be expanded as $\mathbf{M} = \sum _{k \in K} \mathbf{c}_k \mathbf{r}_k^{\top }$. Since $\mathbf{M}\mathbf{u}^{\prime }_i = \mathbf{f}_i, \forall i \in I$ are the unbinding conditions (Eq. SECREF65), we must have $\mathbf{c}_i = \mathbf{f}_i, i \in I$. Let $\mathbf{M}_{{\mathrm {TPR}}} \equiv \sum _{i \in I} \mathbf{f}_i \mathbf{r}_i^{\top }$. This is the desired TPR, with fillers $\mathbf{f}_i$ bound to the role vectors $\mathbf{r}_i$ which are the duals of the unbinding vectors $\mathbf{u}_i^{\prime }$ ($i \in I$). Then we have $\mathbf{M} = \mathbf{M}_{{\mathrm {TPR}}} + \mathbf{Z}$ (Eq. SECREF65) where $\mathbf{Z} \equiv \sum _{j \in K, j \notin I} \mathbf{c}_j \mathbf{r}_j^{\top }$; so $\mathbf{Z}\mathbf{u}_i^{\prime } = {\mathbf {0}}, i \in I$ (Eq. SECREF65). Thus, if training is successful, the model must have learned how to feed the decoder with order-3 TPRs with the structure posited in Eq. SECREF65.
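A quick numerical check of this argument for the order-2 case, using NumPy and randomly chosen linearly independent unbinding vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 5, 2
U = rng.normal(size=(d, k))                  # linearly independent unbinding vectors (columns)
M = rng.normal(size=(d, d))                  # an arbitrary order-2 tensor (matrix)
fillers = M @ U                              # the fillers produced by unbinding: f_i = M u'_i
R = U @ np.linalg.inv(U.T @ U)               # role vectors dual to U: R.T @ U = I
Z = M - fillers @ R.T                        # residual after removing the TPR sum of f_i r_i^T
print(np.allclose(Z @ U, 0))                 # True: the residual annihilates the unbinding vectors
```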
The argument so far addresses the case where the unbinding vectors are linearly independent, making it possible to satisfy Eq. SECREF65 exactly. In relatively high-dimensional vector spaces, it will often happen that even when the number of unbinding vectors exceeds the dimension of their space by a factor of 2 or 3 (which applies to the TP-N2F models presented here), there is a set of role vectors $\lbrace \mathbf{r}_k \rbrace _{k \in K}$ approximately dual to $\lbrace \mathbf{u}^{\prime }_k \rbrace _{k \in K}$, such that $\mathbf{r}_l^\top \mathbf{u}_j^{\prime } = \delta _{lj} \hspace{2.84526pt}\forall l, j \in K$ holds to a good approximation. (If the distribution of normalized unbinding vectors is approximately uniform on the unit sphere, then choosing the approximate dual vectors to equal the unbinding vectors themselves will do, since they will be nearly orthonormal BIBREF10. If the $\lbrace \mathbf{u}^{\prime }_k \rbrace _{k \in K}$ are not normalized, we just rescale the role vectors, choosing $\mathbf{r}_k = \mathbf{u}_k^{\prime } / \Vert \mathbf{u}_k^{\prime } \Vert ^2$.) When the number of such role vectors exceeds the dimension of the embedding space, they will be overcomplete, so while it is still true that any matrix $\mathbf{M}$ can be expanded as above ($\mathbf{M} = \sum _{k \in K} \mathbf{c}_k \mathbf{r}_k^{\top }$), this expansion will no longer be unique. So while it remains true that $\mathbf{M}$ is a TPR, it is no longer uniquely decomposable into filler/role pairs. The claim above does not claim uniqueness in this sense, and remains true.
Appendix ::: Dataset samples ::: Data sample from MathQA dataset
Problem: The present polulation of a town is 3888. Population increase rate is 20%. Find the population of town after 1 year?
Options: a) 2500, b) 2100, c) 3500, d) 3600, e) 2700
Operations: multiply(n0,n1), divide(#0,const-100), add(n0,#1)
Appendix ::: Dataset samples ::: Data sample from AlgoLisp dataset
Problem: Consider an array of numbers and a number, decrements each element in the given array by the given number, what is the given array?
Program Nested List: (map a (partial1 b –))
Command-Sequence: (partial1 b –), (map a #0)
Appendix ::: Generated programs comparison
In this section, we display some generated samples from the two datasets, where the TP-N2F model generates correct programs but LSTM-Seq2Seq does not.
Question: A train running at the speed of 50 km per hour crosses a post in 4 seconds. What is the length of the train?
TP-N2F(correct):
(multiply,n0,const1000) (divide,#0,const3600) (multiply,n1,#1)
LSTM(wrong):
(multiply,n0,const0.2778) (multiply,n1,#0)
Question: 20 is subtracted from 60 percent of a number, the result is 88. Find the number?
TP-N2F(correct):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1)
LSTM(wrong):
(add,n0,n2) (divide,n1,const100) (divide,#0,#1) (multiply,#2,n3) (subtract,#3,n0)
Question: The population of a village is 14300. It increases annually at the rate of 15 percent. What will be its population after 2 years?
TP-N2F(correct):
(divide,n1,const100) (add,#0,const1) (power,#1,n2) (multiply,n0,#2)
LSTM(wrong):
(multiply,const4,const100) (sqrt,#0)
Question: There are two groups of students in the sixth grade. There are 45 students in group a, and 55 students in group b. If, on a particular day, 20 percent of the students in group a forget their homework, and 40 percent of the students in group b forget their homework, then what percentage of the sixth graders forgot their homework?
TP-N2F(correct):
(add,n0,n1) (multiply,n0,n2) (multiply,n1,n3) (divide,#1,const100) (divide,#2,const100) (add,#3,#4) (divide,#5,#0) (multiply,#6,const100)
LSTM(wrong):
(multiply,n0,n1) (subtract,n0,n1) (divide,#0,#1)
Question: 1 divided by 0.05 is equal to
TP-N2F(correct):
(divide,n0,n1)
LSTM(wrong):
(divide,n0,n1) (multiply,n2,#0)
Question: Consider a number a, compute factorial of a
TP-N2F(correct):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( invoke1,#5,a )
LSTM(wrong):
( <=,arg1,1 ) ( -,arg1,1 ) ( self,#1 ) ( *,#2,arg1 ) ( if,#0,1,#3 ) ( lambda1,#4 ) ( len,a ) ( invoke1,#5,#6 )
Question: Given an array of numbers and numbers b and c, add c to elements of the product of elements of the given array and b, what is the product of elements of the given array and b?
TP-N2F(correct):
( partial, b,* ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
LSTM(wrong):
( partial1,b,+ ) ( partial1,c,+ ) ( map,a,#0 ) ( map,#2,#1 )
Question: You are given an array of numbers a and numbers b , c and d , let how many times you can replace the median in a with sum of its digits before it becomes a single digit number and b be the coordinates of one end and c and d be the coordinates of another end of segment e , your task is to find the length of segment e rounded down
TP-N2F(correct):
( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 ) ( - #13 c ) ( digits arg1 ) ( len #15 ) ( == #16 1 ) ( digits arg1 ) ( reduce #18 0 + ) ( self #19 ) ( + 1 #20 ) ( if #17 0 #21 ) ( lambda1 #22 ) ( sort a ) ( len a ) ( / #25 2 ) ( deref #24 #26 ) ( invoke1 #23 #27 ) ( - #28 c ) ( * #14 #29 ) ( - b d ) ( - b d ) ( * #31 #32 ) ( + #30 #33 ) ( sqrt #34 ) ( floor #35 )
LSTM(wrong): ( digits arg1 ) ( len #0 ) ( == #1 1 ) ( digits arg1 ) ( reduce #3 0 + ) ( self #4 ) ( + 1 #5 ) ( if #2 0 #6 ) ( lambda1 #7 ) ( sort a ) ( len a ) ( / #10 2 ) ( deref #9 #11 ) ( invoke1 #8 #12 c ) ( - #13 ) ( - b d ) ( - b d ) ( * #15 #16 ) ( * #14 #17 ) ( + #18 ) ( sqrt #19 ) ( floor #20 )
Question: Given numbers a , b , c and e , let d be c , reverse digits in d , let a and the number in the range from 1 to b inclusive that has the maximum value when its digits are reversed be the coordinates of one end and d and e be the coordinates of another end of segment f , find the length of segment f squared
TP-N2F(correct):
( digits c ) ( reverse #0 ) ( * arg1 10 ) ( + #2 arg2 ) ( lambda2 #3 ) ( reduce #1 0 #4 ) ( - a #5 ) ( digits c ) ( reverse #7 ) ( * arg1 10 ) ( + #9 arg2 ) ( lambda2 #10 ) ( reduce #8 0 #11 ) ( - a #12 ) ( * #6 #13 ) ( + b 1 ) ( range 0 #15 ) ( digits arg1 ) ( reverse #17 ) ( * arg1 10 ) ( + #19 arg2 ) ( lambda2 #20 ) ( reduce #18 0 #21 ) ( digits arg2 ) ( reverse #23 ) ( * arg1 10 ) ( + #25 arg2 ) ( lambda2 #26 ) ( reduce #24 0 #27 ) ( > #22 #28 ) ( if #29 arg1 arg2 ) ( lambda2 #30 ) ( reduce #16 0 #31 ) ( - #32 e ) ( + b 1 ) ( range 0 #34 ) ( digits arg1 ) ( reverse #36 ) ( * arg1 10 ) ( + #38 arg2 ) ( lambda2 #39 ) ( reduce #37 0 #40 ) ( digits arg2 ) ( reverse #42 ) ( * arg1 10 ) ( + #44 arg2 ) ( lambda2 #45 ) ( reduce #43 0 #46 ) ( > #41 #47 ) ( if #48 arg1 arg2 ) ( lambda2 #49 ) ( reduce #35 0 #50 ) ( - #51 e ) ( * #33 #52 ) ( + #14 #53 )
LSTM(wrong):
( - a d ) ( - a d ) ( * #0 #1 ) ( digits c ) ( reverse #3 ) ( * arg1 10 ) ( + #5 arg2 ) ( lambda2 #6 ) ( reduce #4 0 #7 ) ( - #8 e ) ( + b 1 ) ( range 0 #10 ) ( digits arg1 ) ( reverse #12 ) ( * arg1 10 ) ( + #14 arg2 ) ( lambda2 #15 ) ( reduce #13 0 #16 ) ( digits arg2 ) ( reverse #18 ) ( * arg1 10 ) ( + #20 arg2 ) ( lambda2 #21 ) ( reduce #19 0 #22 ) ( > #17 #23 ) ( if #24 arg1 arg2 ) ( lambda2 #25 ) ( reduce #11 0 #26 ) ( - #27 e ) ( * #9 #28 ) ( + #2 #29 )
Appendix ::: Unbinding relation vector clustering
We run K-means clustering on both datasets with $k = 3,4,5,6$ clusters and the results are displayed in Figure FIGREF71 and Figure FIGREF72. As described before, unbinding-vectors for operators or functions with similar semantics tend to be closer to each other. For example, in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together at middle, and operators related to geometry such as square or volume are clustered together at bottom left. In AlgoLisp dataset, basic arithmetic functions are clustered at middle, and string processing functions are clustered at right. | Operation accuracy: 71.89
Execution accuracy: 55.95 |
a3a871ca2417b2ada9df1438d282c45e4b4ad668 | a3a871ca2417b2ada9df1438d282c45e4b4ad668_0 | Q: How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?
Text: Introduction
Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, the system generates a Greeting-type response to a former Greeting-type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing the DAs of a whole online service record between customer and agent is beneficial for mining QA-pairs, which are then selected and clustered to expand the knowledge base. DA recognition is challenging because the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from the Switchboard dataset. In this example, the utterance “Okay.” corresponds to two different DA labels within different semantic contexts.
Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentence and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding the CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network.
However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialog. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on local and contextual semantic information through a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialog length quantitatively, we introduce a new dialog segmentation mechanism for the DA task and evaluate the performance of different dialogue lengths and context padding lengths under online and offline settings. Experiments and visualization show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance on two well-known datasets.
The contributions of this paper are:
We design a hierarchical model based on self-attention and revise the attention distribution to focus on local and contextual semantic information using the relative position information between utterances.
We introduce a new dialog segmentation mechanism for the DA task and analyze the effect of dialog length and context padding length.
In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting.
Background ::: Related Work
DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends in solving this problem: 1) as a sequence labeling problem, predicting the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, treating each utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.
Recent studies have applied deep learning based model for DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNN and RNN to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on the top of the hierarchical network to model the label transition dependency. BIBREF10 applied the context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and got a significant improvement over state-of-the-art results on SwDA datasets. On another aspect, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoders (DI-VAE) model to learn discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation.
Background ::: Self-Attention
Self-attention BIBREF11 achieves great success for its efficiently parallel computation and long-range dependency modeling.
Given the input sequence $ s = \left( s_1,...,s_n \right) $ of $n$ elements where $ s_i \in \mathbb {R}^{d_s} $, each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \in {\mathbb {R}}^{d_s \times d_z} $, where $ h $ denotes the index of the head. For the head $h$, a linear projection is applied to the sequence $s$ to obtain key (K), query (Q), and value (V) representations. The attention module then obtains the weights by computing dot products between each key/query pair and normalizing the result with $softmax$. It is defined as:
where $\sqrt{d_z}$ is the scaling factor that counteracts the effect that the dot products may grow large in magnitude. For all the heads,
where $W^O \in \mathbb {R}^{(d_z*h)\times d_s}$ is the output projection.
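For concreteness, a single head of this scaled dot-product self-attention can be sketched in PyTorch as follows (the projection matrices are passed in explicitly; the multi-head concatenation and $W^O$ are omitted):

```python
import torch
import torch.nn.functional as F

def attention_head(s, Wq, Wk, Wv):
    # s: (n, d_s) utterance vectors; Wq, Wk, Wv: (d_s, d_z) projections of one head.
    Q, K, V = s @ Wq, s @ Wk, s @ Wv
    scores = Q @ K.t() / K.size(-1) ** 0.5          # scaled dot products
    return F.softmax(scores, dim=-1) @ V            # (n, d_z)
```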
One weakness of the self-attention model is that it cannot encode position information efficiently. Some methods have been proposed to encode the relative or absolute position of tokens in the sequence as additional input to the model. BIBREF11 used sine and cosine functions of different frequencies and added the positional encodings to the input embeddings. It used absolute position embeddings to capture relative positional relations through the characteristics of the sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encoding to explicitly model relative position with independent semantic parameters. It demonstrated significant improvements even when entirely replacing conventional absolute position encodings. BIBREF12 proposed to model localness for the self-attention network by a learnable Gaussian bias, which enhanced the ability to model local relationships and demonstrated effectiveness on the translation task.
In our study, we design a local contextual attention model, which incorporates relative position information by a learnable Gaussian bias into original attention distribution. Different from BIBREF12, in our method, the distribution center is regulated around the corresponding utterance with a window, which indicates the context dependency preference, for capturing more local contextual dependency.
Methodology
Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. Given the dataset, $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$.
Figure FIGREF6 shows our overall model structure. For the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted into dense vector representations $e_m$ from one-hot token representation. And then, we apply LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$:
where $embed$ represents the embedding layer which can be initialized by pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. LSTM helps us get the context-aware sentence representation for the input sequence. There are several approaches to represent the sentence from the words. Following BIBREF22, we add a max-pooling layer after LSTM, which selects the maximum value in each dimension from the hidden units. In our experiment, LSTM with max-pooling does perform a little better than LSTM with last-pooling, which is used in BIBREF9.
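A minimal PyTorch sketch of this utterance encoder (embedding, LSTM, max-pooling); the class name and sizes are illustrative:

```python
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Sketch: embed the words, run an LSTM, max-pool over time for one vector per utterance."""
    def __init__(self, vocab_size, d_emb, d_hidden):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.lstm = nn.LSTM(d_emb, d_hidden, batch_first=True)

    def forward(self, word_ids):                    # (batch, M) padded word ids
        h, _ = self.lstm(self.embed(word_ids))      # (batch, M, d_hidden)
        return h.max(dim=1).values                  # max-pooling over word positions
```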
Afterwards, we get the utterances vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$ where $ u_j \in \mathbb {R}^{d_s}, d_s$ is the dimension of hidden units. As we discussed in section SECREF7, given the sequence $ s \in \mathbb {R}^{N_l*d_s}$, self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and get the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which represents the dependency encoder between utterances, and the current utterance encoder $s$:
Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance.
Methodology ::: Modeling Local Contextual Attention
The attention explicitly models the interaction between the utterances. However, for context modeling, the original attention mechanism always considers all of the utterances in a dialogue, which weakens the relations within the local context and is prone to overfitting during training. It is natural that utterances in the local context always have strong dependencies in our daily dialog. Therefore, we add a learnable Gaussian bias with a local constraint to the weights normalized by $softmax$ to enhance the interaction between concerned utterances and their neighbors.
The attention module formula is revised as:
The first term is the original dot-product self-attention model. $POS \in \mathbb {R}^{N\times N}$ is the bias matrix, where N is the length of the dialogue. The element $POS_{i,j}$ is defined by the following Gaussian form:
$POS_{i,j}$ measures the dependency between the utterance $u_j$ and the utterance $u_i$ in terms of the relative position prior. $w_{i}$ represents the standard deviation, which controls the weight decay. Because of the local constraint, $|c_{i} - i| <= C$, for each utterance $u_i$, the predicted center position $c_{i}$ and window size $ w_{i}$ are defined as follows:
where $W_i^c,W_i^d \in \mathbb {R}^{1*N}$ are both learnable parameters. We initialized the parameter $W_i^c$ to 0, which leads to center position $ c_i = i $ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we assign the mean of key $\overline{K}$ in attention mechanism to represent the context information. Moreover, the central position also indicates the dependency preference of the preceding utterances or subsequent utterances.
It is worth noting that there is a little difference with BIBREF12, although we both revise the attention module by the Gaussian distribution. In our method, for the given utterance $u_{i}$, the distribution center $c_{i}$ is regulated for capturing the not only local but also contextual dependency, which can be formally expressed as: $c_{i} \in (i-C,i+C)$. However, in their work, the distribution center can be anywhere in the sequence, and it is designed for capturing the phrasal patterns, which are essential for Neural Machine Translation task.
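A hedged PyTorch sketch of attention with the Gaussian positional bias added to the logits; since the exact parameterization of $c_i$ and $w_i$ is not reproduced above, this sketch assumes a single tanh-bounded center offset and a softplus width computed from $\overline{K}$, which simplifies the per-utterance formulation:

```python
import torch
import torch.nn.functional as F

def local_contextual_attention(Q, K, V, w_c, w_d, C=3):
    # Q, K, V: (n, d) utterance representations; w_c, w_d: assumed learnable vectors of size d.
    n, d = Q.shape
    logits = Q @ K.t() / d ** 0.5                  # original dot-product term
    k_bar = K.mean(dim=0)                          # context summary, the mean of the keys
    idx = torch.arange(n, dtype=Q.dtype)
    center = idx + C * torch.tanh(w_c @ k_bar)     # |c_i - i| <= C (shared offset: a simplification)
    width = F.softplus(w_d @ k_bar) + 1e-3         # positive standard deviation
    pos = -((idx.unsqueeze(0) - center.unsqueeze(1)) ** 2) / (2 * width ** 2)
    return F.softmax(logits + pos, dim=-1) @ V
```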
Methodology ::: Online and Offline Predictions
Previous work mainly focuses on the offline setting where we can access the whole utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is the natural demand in our real-time applications. For the online setting, we only care about the recognition result of the last utterance in the given context, as seen in the area with the red dashed line in Figure FIGREF6, our model is well compatible with online setting, we can calculate the attention between the last utterance and the other utterances directly where $K \in \mathbb {R}^{1\times d}, Q \in \mathbb {R}^{n\times d}, V \in \mathbb {R}^{n\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention based models. Table TABREF17 shows the time complexity comparison excluding the time cost of first layer encoding, and the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to expand into the online setting, however, to have a fair comparison with previous work, in our experiments, we applied the models under the offline setting by default.
Methodology ::: Separate into Sub-dialogues
The length of different dialogues in the dataset varies a lot, and it is worth noting that the dialog length affects the model prediction. On the one hand, under the offline setting, we can access all of the utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances, the more efficient. On the other hand, if we put too many utterances into one prediction, the model captures too many unrelated dependencies in the long utterance sequence, for both the LSTM and the attention-mechanism-based models. Sub-dialogues of the same length also enable efficient batch training. To study how the dialogue length and the context padding length affect performance, we define a sliding window $W$ which is the sub-dialogue length, and separate each long dialogue into several small sub-dialogues. For example, if the dialog $D$ is a sequence of utterances with length $n$, we will get $\lceil n/W \rceil $ sub-dialogues; for the k-th sub-dialogue, the utterance sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. To avoid losing context information caused by this separation, which would affect the context modeling for the utterances at the beginning and end of each sub-dialogue, we add the corresponding context of $P$ (context padding) utterances at the beginning and the end of each sliding window, so for the k-th sub-dialogue, the revised utterance sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context padding utterances, which can be formally expressed as:
where $M(i)=0$ if utterance $i$ is in the context padding and $M(i)=1$ otherwise, and $L$ is the cross-entropy loss.
The $W$ and $P$ are both hyperparameters; in the experiment SECREF21, we will talk about the effect of the window size and the context padding length.
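A small Python sketch of this segmentation, producing sub-dialogues of window size $W$ with $P$ context-padding utterances on each side and a loss mask that zeroes out the padded positions:

```python
def split_dialogue(utts, labels, W=10, P=5):
    """Cut one dialogue into windows of length W with P context-padding utterances
    on each side; padded positions get mask 0 so their loss is ignored."""
    chunks = []
    n = len(utts)
    for start in range(0, n, W):
        lo, hi = max(0, start - P), min(n, start + W + P)
        mask = [1 if start <= i < min(start + W, n) else 0 for i in range(lo, hi)]
        chunks.append((utts[lo:hi], labels[lo:hi], mask))
    return chunks
```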
Experiments ::: Datasets
We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label “+”. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.
[1]The author claimed that they achieved 78.7% (81.3%) accuracy with pre-trained word embedding (fine-grained embedding). For a fair comparison, both the previous work and our work are simply based on pre-trained word embedding. [2]The author randomly selected two test sets which are different from previous work and our work and achieved 77.15% and 79.74%; we reimplemented the model on the standard test set.
Experiments ::: Results on SwDA
In this section, we evaluate the proposed approaches on the SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA. It is worth noting that BIBREF10 combined GloVe BIBREF28 and pre-trained ELMo representations BIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. The baseline models CNN and LSTM obtained similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to perform recognition based on a single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed the LSTM and CNN models for single-sentence classification. However, it was still much lower than the models based on context information, which indicates that context information is crucial in the DA recognition task. BERT can boost performance by a large margin, but it costs too much time and resources. For this reason, we chose LSTM as our utterance encoder in further experiments.
By modeling context information, the performance of the hierarchical model is improved by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, in our experiments, we removed the CRF module. In terms of different hierarchical models, our LSTM+BLSTM achieved a good result: the accuracy was 80.00%, which is even a little better than Hierarchical BLSTM-CRF BIBREF9. Relying on the attention mechanism and local contextual modeling, our models LSTM+Attention and LSTM+Local Contextual Attention achieved 80.12% and 80.34% accuracy respectively. Compared with the previous best approach, Hierarchical BLSTM-CRF, we obtain a relative accuracy gain of 1.1% with our best model. This indicates that the self-attention model can capture context dependency better than the BLSTM model. With the added local constraint, we can get an even better result.
To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. It is worth noting that it is actually the same as single-sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to study how the length of the context padding affects performance. As seen in the result, the accuracy increased when more context padding was used for both the LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when the context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With the sliding window size increasing, more context was involved, together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when the window size was 1 and the context padding length was 5. When the window size increased, the performance of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently: it achieved the best performance when the window size was 10, and it was more stable and robust to different settings of the window size.
For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting. As seen in Table TABREF22, without subsequent utterances, the performances of these three models dropped. However, LSTM+LC Attention still outperformed the other two models.
Experiments ::: Result on DailyDialog
The classification accuracy on the DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed the LSTM and CNN based models. From Table TABREF18 we can see that the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of $W$ to 10, which almost covers the whole of each dialogue. In the same way as for the SwDA dataset, we first set $W$ to 1 and increased the length of the context padding. As seen, by modeling local context information, the hierarchical models yielded significant improvement over sentence classification. There was no further accuracy improvement when the length of the context padding was beyond 2, so we fixed the context padding length $P$ to 2 and increased the sliding window size $W$. From the experiments, we can see that LSTM+Attention always got slightly better accuracy than LSTM+BLSTM. With the window size increasing, the performance of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For longer sliding windows, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performance of all three models dropped without subsequent utterances.
Experiments ::: Visualization
In this section, we visualize the attention weights to analyze in detail how local contextual attention works. Figure FIGREF24 shows the visualization of the original attention and the local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of the grid is normalized by $softmax$, and $M_{ij}$ represents the dependency score between utterance $i$ and utterance $j$. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights, annotated in red, which are learned by the original attention. The original attention model gives the utterance “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens these attention weights because of the long relative distance between the two utterances.
Overall, the additional Gaussian bias tends to centralize the attention distribution toward the diagonal of the matrix, which is in line with our linguistic intuition that utterances that are far apart usually do not have strong dependencies. As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances that are separated by a long relative distance. For the grids near the diagonal, it strengthens their dependency scores and, thanks to its learnable magnitude, does not introduce other useless dependencies.
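The attention matrices described above can be inspected with a simple heatmap. The following is a minimal illustrative sketch (not the authors' code) using NumPy and matplotlib; the toy matrices and the Gaussian bias width used here are assumptions for demonstration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(original, revised):
    """Heatmaps of two (n x n) row-normalized attention matrices side by side."""
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, m, title in zip(axes, [original, revised],
                            ["original attention", "local contextual attention"]):
        im = ax.imshow(m, cmap="viridis", vmin=0.0, vmax=1.0)
        ax.set_title(title)
        ax.set_xlabel("attended utterance j")
        ax.set_ylabel("query utterance i")
        fig.colorbar(im, ax=ax)
    plt.tight_layout()
    plt.show()

# toy example: the Gaussian bias pulls weight toward the diagonal
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8))
softmax = lambda x: np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)
bias = -np.subtract.outer(np.arange(8), np.arange(8)) ** 2 / (2 * 2.0 ** 2)
plot_attention(softmax(logits), softmax(logits + bias))
```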
Conclusions and Future Work
In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model can explicitly capture the semantic dependencies between utterances inside the dialogue. To enhance our model with local contextual information, we revise the attention distribution with a learnable Gaussian bias to make it focus on local neighbors. Based on our dialogue segmentation mechanism, we find that local contextual attention reduces noise by exploiting relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied under both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition.
There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method to multi-task training over the two stages jointly. Incorporating supervised information about dependencies between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition. | Table TABREF20, Table TABREF22, Table TABREF23 |
0fcac64544842dd06d14151df8c72fc6de5d695c | 0fcac64544842dd06d14151df8c72fc6de5d695c_0 | Q: What previous methods is the proposed method compared against?
Text: Introduction
Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, the system generates a Greeting-type response to a former Greeting-type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In an offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing the DAs of a complete online service record between customer and agent helps to mine QA-pairs, which are then selected and clustered to expand the knowledge base. DA recognition is challenging because the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from the Switchboard dataset. In this example, the utterance “Okay.” corresponds to two different DA labels within different semantic contexts.
Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentence and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding the CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network.
However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialog. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on a local and contextual semantic information by a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialog length quantitatively, we introduce a new dialog segmentation mechanism for the DA task and evaluate the performance of different dialogue length and context padding length under online and offline settings. Experiment and visualization show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance in two well-known datasets.
The contributions of this paper are:
We design a hierarchical model based on self-attention and revise the attention distribution to focus on a local and contextual semantic information by the relative position information between utterances.
We introduce a new dialog segmentation mechanism for the DA task and analyze the effect of dialog length and context padding length.
In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting.
Background ::: Related Work
DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends in solving this problem: 1) as a sequence labeling problem, where the labels for all utterances in the whole dialogue history are predicted jointly BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, where each utterance is treated independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.
Recent studies have applied deep learning based model for DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNN and RNN to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on the top of the hierarchical network to model the label transition dependency. BIBREF10 applied the context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and got a significant improvement over state-of-the-art results on SwDA datasets. On another aspect, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoders (DI-VAE) model to learn discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation.
Background ::: Self-Attention
Self-attention BIBREF11 achieves great success owing to its efficient parallel computation and long-range dependency modeling.
Given an input sequence $ s = \left( s_1,...,s_n \right) $ of $n$ elements, where $ s_i \in \mathbb {R}^{d_s} $, each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \in {\mathbb {R}}^{d_s \times d_z} $, where $ h $ denotes the index of the head. For head $h$, a linear projection is applied to the sequence $s$ to obtain the key (K), query (Q), and value (V) representations. The attention module then computes dot products between each key/query pair and normalizes the result with $softmax$ to obtain the weights. It is defined as:
where $\sqrt{d_z}$ is the scaling factor that counteracts the effect of the dot products growing large in magnitude. Combining all the heads,
where $W^O \in \mathbb {R}^{(d_z*h)\times d_s}$ is the output projection.
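The scaled dot-product attention and multi-head combination defined above can be sketched directly from the equations. The following PyTorch snippet is a minimal illustration under toy dimensions; the variable names follow the notation above ($d_s$, $d_z$, $h$), and the random parameters are placeholders rather than trained weights.

```python
import math
import torch

def multi_head_self_attention(s, Wq, Wk, Wv, Wo):
    """s: (n, d_s); Wq/Wk/Wv: one (d_s, d_z) matrix per head; Wo: (h*d_z, d_s)."""
    heads = []
    for Wq_h, Wk_h, Wv_h in zip(Wq, Wk, Wv):
        Q, K, V = s @ Wq_h, s @ Wk_h, s @ Wv_h        # linear projections
        scores = Q @ K.t() / math.sqrt(K.size(-1))    # scaled dot products
        A = torch.softmax(scores, dim=-1)             # row-wise normalization
        heads.append(A @ V)                           # weighted sum of values
    return torch.cat(heads, dim=-1) @ Wo              # output projection W^O

n, d_s, d_z, h = 6, 16, 8, 2
s = torch.randn(n, d_s)
Wq = [torch.randn(d_s, d_z) for _ in range(h)]
Wk = [torch.randn(d_s, d_z) for _ in range(h)]
Wv = [torch.randn(d_s, d_z) for _ in range(h)]
Wo = torch.randn(h * d_z, d_s)
out = multi_head_self_attention(s, Wq, Wk, Wv, Wo)    # (n, d_s)
```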
One weakness of self-attention models is that they cannot encode position information efficiently. Some methods have been proposed to encode the relative or absolute position of tokens in the sequence as additional input to the model. BIBREF11 used sine and cosine functions of different frequencies and added the positional encodings to the input embeddings. It used absolute position embeddings to capture relative positional relations through the characteristics of the sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encodings to explicitly model relative position with independent semantic parameters. It demonstrated significant improvements even when entirely replacing conventional absolute position encodings. BIBREF12 proposed to model localness for the self-attention network with a learnable Gaussian bias, which enhanced the ability to model local relationships and demonstrated its effectiveness on the translation task.
In our study, we design a local contextual attention model, which incorporates relative position information by a learnable Gaussian bias into original attention distribution. Different from BIBREF12, in our method, the distribution center is regulated around the corresponding utterance with a window, which indicates the context dependency preference, for capturing more local contextual dependency.
Methodology
Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. Given the dataset, $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$.
Figure FIGREF6 shows our overall model structure. For the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted into dense vector representations $e_m$ from one-hot token representation. And then, we apply LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$:
where $embed$ represents the embedding layer which can be initialized by pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. LSTM helps us get the context-aware sentence representation for the input sequence. There are several approaches to represent the sentence from the words. Following BIBREF22, we add a max-pooling layer after LSTM, which selects the maximum value in each dimension from the hidden units. In our experiment, LSTM with max-pooling does perform a little better than LSTM with last-pooling, which is used in BIBREF9.
Afterwards, we get the utterances vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$ where $ u_j \in \mathbb {R}^{d_s}, d_s$ is the dimension of hidden units. As we discussed in section SECREF7, given the sequence $ s \in \mathbb {R}^{N_l*d_s}$, self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and get the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which represents the dependency encoder between utterances, and the current utterance encoder $s$:
Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance.
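Putting the pieces together, a minimal PyTorch sketch of this hierarchy (word embedding + LSTM with max-pooling per utterance, self-attention with a residual connection across utterances, and a two-layer ReLU classifier) might look as follows. The hidden sizes, the use of nn.MultiheadAttention, and the number of classes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HierarchicalDAModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, n_classes=42):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, n_classes))

    def forward(self, dialog_tokens):
        # dialog_tokens: (n_utt, max_words) word ids for one dialogue
        h, _ = self.lstm(self.embed(dialog_tokens))   # (n_utt, max_words, hid)
        u = h.max(dim=1).values                       # max-pooling over words
        u = u.unsqueeze(0)                            # (1, n_utt, hid), batch of one dialogue
        ctx, _ = self.attn(u, u, u)                   # self-attention over utterances
        s = u + ctx                                   # residual connection
        return self.classifier(s).squeeze(0)          # (n_utt, n_classes) logits

model = HierarchicalDAModel(vocab_size=30000)
logits = model(torch.randint(1, 30000, (8, 20)))      # 8 utterances, 20 words each
```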
Methodology ::: Modeling Local Contextual Attention
The attention explicitly models the interaction between the utterances. However, for context modeling, the original attention mechanism always considers all of the utterances in a dialogue, which obscures the relations within the local context and is prone to overfitting during training. It is natural that utterances in the local context always have strong dependencies in our daily dialog. Therefore, we add a learnable Gaussian bias with a local constraint to the weights normalized by $softmax$, to enhance the interaction between the concerned utterance and its neighbors.
The attention module formula is revised as:
The first term is the original dot-product self-attention model. $POS \in \mathbb {R}^{N\times N}$ is the bias matrix, where $N$ is the length of the dialogue. The element $POS_{i,j}$ is defined by the following Gaussian distribution:
$POS_{i,j}$ measures the dependency between the utterance $u_j$ and the utterance $u_i$ in terms of the relative position prior. $w_{i}$ represents the standard deviation, which controls the weight decay. Because of the local constraint, $|c_{i} - i| \le C$, for each utterance $u_i$, the predicted center position $c_{i}$ and window size $ w_{i}$ are defined as follows:
where $W_i^c,W_i^d \in \mathbb {R}^{1*N}$ are both learnable parameters. We initialized the parameter $W_i^c$ to 0, which leads to center position $ c_i = i $ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we assign the mean of key $\overline{K}$ in attention mechanism to represent the context information. Moreover, the central position also indicates the dependency preference of the preceding utterances or subsequent utterances.
It is worth noting that there is a small difference from BIBREF12, although we both revise the attention module with a Gaussian distribution. In our method, for a given utterance $u_{i}$, the distribution center $c_{i}$ is regulated to capture not only local but also contextual dependency, which can be formally expressed as $c_{i} \in (i-C,i+C)$. In their work, however, the distribution center can be anywhere in the sequence, since it is designed to capture phrasal patterns, which are essential for the Neural Machine Translation task.
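A minimal sketch of the revised attention with the local Gaussian bias is given below. Since the exact parameterization of $c_i$ and $w_i$ is only outlined above, the way the mean key is mapped to a bounded center offset and a positive width here is an assumption for illustration (a single offset and width shared across positions), not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def local_contextual_attention(Q, K, V, w_c, w_d, C=3):
    """Q, K, V: (n, d) utterance representations; w_c, w_d: (d,) learnable vectors;
    C is the local constraint on how far the center may move from position i."""
    n, d = Q.shape
    scores = Q @ K.t() / math.sqrt(d)                   # original dot-product logits
    k_bar = K.mean(dim=0)                               # mean key as context summary
    i = torch.arange(n, dtype=torch.float).unsqueeze(1) # (n, 1) query positions
    j = torch.arange(n, dtype=torch.float).unsqueeze(0) # (1, n) key positions
    c = i + C * torch.tanh(w_c @ k_bar)                 # assumed centers, |c_i - i| <= C
    w = 1.0 + F.softplus(w_d @ k_bar)                   # assumed positive window width
    POS = -((j - c) ** 2) / (2.0 * w ** 2)              # Gaussian bias matrix
    return torch.softmax(scores + POS, dim=-1) @ V      # revised attention output

n, d = 8, 16
Q = K = V = torch.randn(n, d)
out = local_contextual_attention(Q, K, V, torch.randn(d), torch.randn(d))
```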
Methodology ::: Online and Offline Predictions
Previous work mainly focuses on the offline setting, where we can access all the utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is the natural demand of real-time applications. In the online setting, we only care about the recognition result of the last utterance in the given context. As seen in the area with the red dashed line in Figure FIGREF6, our model is well suited to the online setting: we can calculate the attention between the last utterance and the other utterances directly, where $K \in \mathbb {R}^{1\times d}, Q \in \mathbb {R}^{n\times d}, V \in \mathbb {R}^{n\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention based models. Table TABREF17 shows the time complexity comparison, excluding the time cost of the first-layer encoding, where the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to extend to the online setting; however, to allow a fair comparison with previous work, we applied the models under the offline setting by default in our experiments.
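The computational saving in the online setting comes from needing only the attention scores that involve the last utterance, i.e., a single $1 \times n$ score vector instead of an $n \times n$ matrix. A minimal sketch of this idea (single head, no Gaussian bias) follows; the choice of which projection plays the query role here is a simplification.

```python
import math
import torch

def online_attention(context, last, V):
    """context: (n, d) all utterance vectors; last: (d,) the newest utterance."""
    d = last.shape[-1]
    scores = (context @ last) / math.sqrt(d)   # (n,) one score per context utterance
    weights = torch.softmax(scores, dim=-1)
    return weights @ V                         # (d,) context-aware vector for the last utterance

n, d = 10, 16
utts = torch.randn(n, d)
vec = online_attention(utts, utts[-1], utts)   # only this vector needs to be classified online
```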
Methodology ::: Separate into Sub-dialogues
The length of different dialogues in the dataset varies a lot, and it is worth noting that the dialogue length affects the model prediction. On the one hand, under the offline setting we can access all the utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances we process at once, the more efficient the prediction. On the other hand, if we put too many utterances into one prediction, the model captures too many unrelated dependencies in the long utterance sequence, for both the LSTM and the attention mechanism based models. Sub-dialogues of the same length also enable efficient batch training. To study how the dialogue length and the context padding length affect the performance, we define a sliding window $W$, which is the sub-dialogue length, and separate each long dialogue into several small sub-dialogues. For example, if the dialogue $D$ is a sequence of utterances with length $n$, we get $\lceil n/W \rceil $ sub-dialogues, and for the k-th sub-dialogue, the utterance sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. In order to avoid losing context information through this separation, which would affect the context modeling for the utterances at the beginning and the end of the sub-dialogue, we add the corresponding context of $P$ (context padding) utterances at the beginning and the end of each sliding window, so for the k-th sub-dialogue, the revised utterance sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context padding utterances, which can be formally expressed as:
where $M(i)=0$ if utterance $i$ is in the context padding and $M(i)=1$ otherwise, and $L$ is the cross-entropy loss.
The $W$ and $P$ are both hyperparameters; in the experiment SECREF21, we will talk about the effect of the window size and the context padding length.
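A minimal sketch of this segmentation and loss-masking scheme is shown below; function and variable names are illustrative. The resulting mask would multiply the per-utterance cross-entropy terms so that padding utterances contribute no loss.

```python
def split_dialogue(utterances, W, P):
    """Split a dialogue into sub-dialogues of length W with P context-padding
    utterances on each side; returns (sub_dialogue, loss_mask) pairs."""
    n = len(utterances)
    chunks = []
    for start in range(0, n, W):
        end = min(start + W, n)                      # core window [start, end)
        left, right = max(0, start - P), min(n, end + P)
        sub = utterances[left:right]                 # window plus context padding
        mask = [1 if start <= left + k < end else 0  # 0 on padding, 1 on the core window
                for k in range(len(sub))]
        chunks.append((sub, mask))
    return chunks

dialogue = [f"u{i}" for i in range(12)]
for sub, mask in split_dialogue(dialogue, W=5, P=2):
    print(sub, mask)
```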
Experiments ::: Datasets
We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human-to-human telephone conversations on given topics. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to the SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, 43 categories of dialogue acts are used, which differs from ours and from previous work. The difference in the number of labels is mainly due to the special label “+”, which indicates that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing as BIBREF26, which concatenates the parts of an interrupted utterance together, gives the result the tag of the first part, and puts it in its place in the conversation sequence. This is critical for a fair comparison because nearly 8% of the data has the label “+”. Lacking standard splits, we followed the training/validation/test splits of BIBREF14. The DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style and cover various topics about daily life. Each utterance in the conversation is manually labeled as one of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before being tokenized, and then sentences were tokenized by the WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the out-of-vocabulary problem.
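For the WordPiece preprocessing step, a standard BERT-style tokenizer provides a roughly 30,000-token lowercased vocabulary. The snippet below uses the Hugging Face transformers library as an assumed tool; it illustrates the tokenization step rather than reproducing the authors' exact pipeline.

```python
from transformers import BertTokenizer

# bert-base-uncased ships a ~30k WordPiece vocabulary and lowercases its input
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
pieces = tokenizer.tokenize("Okay, that sounds reasonable to me.")
ids = tokenizer.convert_tokens_to_ids(pieces)
print(pieces)   # subword pieces, e.g. ['okay', ',', 'that', 'sounds', ...]
print(ids)      # corresponding vocabulary ids
```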
[1] The authors claimed that they achieved 78.7% (81.3%) accuracy with pre-trained word embeddings (fine-grained embeddings). For a fair comparison, both previous work and ours are simply based on pre-trained word embeddings. [2] The authors randomly selected two test sets, which differ from those of previous work and ours, and achieved 77.15% and 79.74%; we re-implemented their model on the standard test set.
Experiments ::: Results on SwDA
In this section, we evaluate the proposed approaches on the SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA. It is worth noting that BIBREF10 combined GloVe BIBREF28 and pre-trained ELMo representations BIBREF29 as word embeddings, whereas in our work we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. The baseline models CNN and LSTM obtained similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to perform recognition based on a single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed the LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information, which indicates that context information is crucial in the DA recognition task. BERT can boost performance by a large margin, but it costs too much time and resources. For this reason, we chose LSTM as our utterance encoder in further experiments.
By modeling context information, the performance of the hierarchical model is improved by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, we removed the CRF module in our experiments. Among the different hierarchical models, our LSTM+BLSTM achieved a good result: its accuracy was 80.00%, which is even slightly better than Hierarchical BLSTM-CRF BIBREF9. Relying on the attention mechanism and local contextual modeling, our models, LSTM+Attention and LSTM+Local Contextual Attention, achieved 80.12% and 80.34% accuracy respectively. Compared with the previous best approach, Hierarchical BLSTM-CRF, we obtain a relative accuracy gain of 1.1% with our best model. This indicates that the self-attention model can capture context dependency better than the BLSTM model. By adding the local constraint, we obtain an even better result.
To further illustrate the effect of the context length, we also performed experiments with different sliding window sizes $W$ and context padding lengths $P$. Table TABREF22 shows the result. It is worth noting that the setting is actually the same as single sentence classification when $P = 0$ (no context provided). First, we set $W$ to 1 to discuss how the length of context padding affects accuracy. As seen in the result, the accuracy increased when more context padding was used for both the LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when the context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. As the sliding window size increases, more context is involved, together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when the window size was 1 and the context padding length was 5. When the window size increased, the performance of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently: it achieved the best performance when the window size was 10, and it was more stable and robust to different settings of the window size.
For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting. As seen in Table TABREF22, without subsequent utterances, the performance of all three models dropped. However, LSTM+LC Attention still outperformed the other two models.
Experiments ::: Results on DailyDialog
The classification accuracy on the DailyDialog dataset is summarized in Table TABREF23. For sentence classification without context information, the fine-tuned BERT still outperformed the LSTM and CNN based models. From Table TABREF18 we can see that the average dialogue length $|U|$ in DailyDialog is much shorter than that of SwDA. So, in our experiment, we set the maximum of $W$ to 10, which almost covers all the utterances in a dialogue. In the same way as for the SwDA dataset, we first set $W$ to 1 and increased the length of context padding. As seen, by modeling local context information, the hierarchical models yielded a significant improvement over sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length $P$ to 2 and increased the sliding window size $W$. From the experiments, we can see that LSTM+Attention consistently achieved slightly better accuracy than LSTM+BLSTM. As the window size increased, the performance of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For longer sliding windows, the performance of LSTM+LC Attention was still better and more robust than that of the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performance of all three models dropped without subsequent utterances.
Experiments ::: Visualization
In this section, we visualize the attention weights to analyze in detail how local contextual attention works. Figure FIGREF24 shows the visualization of the original attention and the local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of the grid is normalized by $softmax$, and $M_{ij}$ represents the dependency score between utterance $i$ and utterance $j$. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights, annotated in red, which are learned by the original attention. The original attention model gives the utterance “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens these attention weights because of the long relative distance between the two utterances.
Overall, the additional Gaussian bias tends to centralize the attention distribution toward the diagonal of the matrix, which is in line with our linguistic intuition that utterances that are far apart usually do not have strong dependencies. As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances that are separated by a long relative distance. For the grids near the diagonal, it strengthens their dependency scores and, thanks to its learnable magnitude, does not introduce other useless dependencies.
Conclusions and Future Work
In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model can explicitly capture the semantic dependencies between utterances inside the dialogue. To enhance our model with local contextual information, we revise the attention distribution with a learnable Gaussian bias to make it focus on local neighbors. Based on our dialogue segmentation mechanism, we find that local contextual attention reduces noise by exploiting relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied under both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition.
There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method to multi-task training over the two stages jointly. Incorporating supervised information about dependencies between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition. | BLSTM+Attention+BLSTM
Hierarchical BLSTM-CRF
CRF-ASN
Hierarchical CNN (window 4)
mLSTM-RNN
DRLM-Conditional
LSTM-Softmax
RCNN
CNN
CRF
LSTM
BERT |
4e841138f307839fd2c212e9f02489e27a5f830c | 4e841138f307839fd2c212e9f02489e27a5f830c_0 | Q: What is dialogue act recognition?
Text: Introduction
Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are selected and clustered then to expand the knowledge base. DA recognition is challenging due to the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic context.
Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentence and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding the CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network.
However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialog. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on a local and contextual semantic information by a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialog length quantitatively, we introduce a new dialog segmentation mechanism for the DA task and evaluate the performance of different dialogue length and context padding length under online and offline settings. Experiment and visualization show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance in two well-known datasets.
The contributions of this paper are:
We design a hierarchical model based on self-attention and revise the attention distribution to focus on a local and contextual semantic information by the relative position information between utterances.
We introduce a new dialog segmentation mechanism for the DA task and analyze the effect of dialog length and context padding length.
In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting.
Background ::: Related Work
DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends in solving this problem: 1) as a sequence labeling problem, where the labels for all utterances in the whole dialogue history are predicted jointly BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, where each utterance is treated independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.
Recent studies have applied deep learning based model for DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNN and RNN to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on the top of the hierarchical network to model the label transition dependency. BIBREF10 applied the context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and got a significant improvement over state-of-the-art results on SwDA datasets. On another aspect, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoders (DI-VAE) model to learn discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation.
Background ::: Self-Attention
Self-attention BIBREF11 achieves great success owing to its efficient parallel computation and long-range dependency modeling.
Given an input sequence $ s = \left( s_1,...,s_n \right) $ of $n$ elements, where $ s_i \in \mathbb {R}^{d_s} $, each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \in {\mathbb {R}}^{d_s \times d_z} $, where $ h $ denotes the index of the head. For head $h$, a linear projection is applied to the sequence $s$ to obtain the key (K), query (Q), and value (V) representations. The attention module then computes dot products between each key/query pair and normalizes the result with $softmax$ to obtain the weights. It is defined as:
where $\sqrt{d_z}$ is the scaling factor to counteract this effect that the dot products may grow large in magnitude. For all the heads,
where $W^O \in \mathbb {R}^{(d_z*h)\times d_s}$ is the output projection.
One weakness of self-attention models is that they cannot encode position information efficiently. Some methods have been proposed to encode the relative or absolute position of tokens in the sequence as additional input to the model. BIBREF11 used sine and cosine functions of different frequencies and added the positional encodings to the input embeddings. It used absolute position embeddings to capture relative positional relations through the characteristics of the sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encodings to explicitly model relative position with independent semantic parameters. It demonstrated significant improvements even when entirely replacing conventional absolute position encodings. BIBREF12 proposed to model localness for the self-attention network with a learnable Gaussian bias, which enhanced the ability to model local relationships and demonstrated its effectiveness on the translation task.
In our study, we design a local contextual attention model, which incorporates relative position information by a learnable Gaussian bias into original attention distribution. Different from BIBREF12, in our method, the distribution center is regulated around the corresponding utterance with a window, which indicates the context dependency preference, for capturing more local contextual dependency.
Methodology
Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. Given the dataset, $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$.
Figure FIGREF6 shows our overall model structure. For the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted into dense vector representations $e_m$ from one-hot token representation. And then, we apply LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$:
where $embed$ represents the embedding layer which can be initialized by pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. LSTM helps us get the context-aware sentence representation for the input sequence. There are several approaches to represent the sentence from the words. Following BIBREF22, we add a max-pooling layer after LSTM, which selects the maximum value in each dimension from the hidden units. In our experiment, LSTM with max-pooling does perform a little better than LSTM with last-pooling, which is used in BIBREF9.
Afterwards, we get the utterances vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$ where $ u_j \in \mathbb {R}^{d_s}, d_s$ is the dimension of hidden units. As we discussed in section SECREF7, given the sequence $ s \in \mathbb {R}^{N_l*d_s}$, self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and get the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which represents the dependency encoder between utterances, and the current utterance encoder $s$:
Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance.
Methodology ::: Modeling Local Contextual Attention
The attention explicitly models the interaction between the utterances. However, for context modeling, original attention mechanism always considers all of the utterances in a dialogue which inhibits the relation among the local context and is prone to overfitting during training. It is natural that utterances in the local context always have strong dependencies in our daily dialog. Therefore, we add a learnable Gaussian bias with the local constraint to the weight normalized by $softmax$ to enhance the interaction between concerned utterances and its neighbors.
The attention module formula is revised as:
The first term is the original dot-product self-attention model. $POS \in \mathbb {R}^{N\times N}$ is the bias matrix, where $N$ is the length of the dialogue. The element $POS_{i,j}$ is defined by the following Gaussian distribution:
$POS_{i,j}$ measures the dependency between the utterance $u_j$ and the utterance $u_i$ in terms of the relative position prior. $w_{i}$ represents the standard deviation, which controls the weight decay. Because of the local constraint, $|c_{i} - i| \le C$, for each utterance $u_i$, the predicted center position $c_{i}$ and window size $ w_{i}$ are defined as follows:
where $W_i^c,W_i^d \in \mathbb {R}^{1*N}$ are both learnable parameters. We initialized the parameter $W_i^c$ to 0, which leads to center position $ c_i = i $ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we assign the mean of key $\overline{K}$ in attention mechanism to represent the context information. Moreover, the central position also indicates the dependency preference of the preceding utterances or subsequent utterances.
It is worth noting that there is a little difference with BIBREF12, although we both revise the attention module by the Gaussian distribution. In our method, for the given utterance $u_{i}$, the distribution center $c_{i}$ is regulated for capturing the not only local but also contextual dependency, which can be formally expressed as: $c_{i} \in (i-C,i+C)$. However, in their work, the distribution center can be anywhere in the sequence, and it is designed for capturing the phrasal patterns, which are essential for Neural Machine Translation task.
Methodology ::: Online and Offline Predictions
Previous work mainly focuses on the offline setting where we can access the whole utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is the natural demand in our real-time applications. For the online setting, we only care about the recognition result of the last utterance in the given context, as seen in the area with the red dashed line in Figure FIGREF6, our model is well compatible with online setting, we can calculate the attention between the last utterance and the other utterances directly where $K \in \mathbb {R}^{1\times d}, Q \in \mathbb {R}^{n\times d}, V \in \mathbb {R}^{n\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention based models. Table TABREF17 shows the time complexity comparison excluding the time cost of first layer encoding, and the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to expand into the online setting, however, to have a fair comparison with previous work, in our experiments, we applied the models under the offline setting by default.
Methodology ::: Separate into Sub-dialogues
The length of different dialogues in the dataset varies a lot. It is worth noting that the length of dialog affects the model prediction. On the one hand, under the offline setting, we can access the whole utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances, the more efficient. However, on the other hand, if we put too many utterances in once prediction, it will model too much unrelated dependency in the long utterances sequence for both LSTM and attention mechanism based model. The sub-dialogues with the same length also enable efficiently batch training. To study how the dialogue length and context padding length will affect the performance, so we defined a sliding window $W$ which is the sub-dialogue length. Then, we separate each long dialogue into several small sub-dialogues. For example, the dialog $D$ is a sequence of utterances with length $n$, and we will get $\lceil x/w \rceil $ sub-dialogues, for the k-th sub-dialogues, the utterances sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. In order to avoid losing some context information caused by being separated, which will affect the context modeling for the utterances in the begin and end of the sub-dialog, we add the corresponding context with $P$ (stands for context padding) utterances at the begin and the end of each sliding window, so for the k-th sub-dialogues, the revised utterances sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context padding utterances, which can be formally expressed as:
where $M(i)=0$ if utterance $i$ is in the context padding and $M(i)=1$ otherwise, and $L$ is the cross-entropy loss.
The $W$ and $P$ are both hyperparameters; in the experiment SECREF21, we will talk about the effect of the window size and the context padding length.
Experiments ::: Datasets
We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human to human telephonic conversations about the given topic. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which is different from us and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing with BIBREF26, which concatenated the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. It is critical for fair comparison because there are nearly 8% data has the label “+”. Lacking standard splits, we followed the training/validation/test splits by BIBREF14. DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style. It covers various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenized, and then sentences were tokenized by WordPiece tokenizer BIBREF27 with a 30,000 token vocabulary to alleviate the Out-of-Vocabulary problem.
[1]The author claimed that they achieved 78.7%(81.3%) accuracy with pre-trained word embedding (fine-grained embedding). For a fair comparison, both previous and our work is simply based on pre-trained word embedding. [2]The author randomly selected two test sets which are different from previous and our work and achieved 77.15% and 79.74%, and we reimplemented in standard test sets.
Experiments ::: Results on SwDA
In this section, we evaluate the proposed approaches on SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA dataset. It is worth noting that BIBREF10 combined GloVeBIBREF28 and pre-trained ELMo representationsBIBREF29 as word embeddings. However, in our work, we only applied the pre-trained word embedding. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. For baseline models, both CNN and LSTM, got similar accuracy (75.27% and 75.59% respectively). We also fine-tuned BERT BIBREF30 to do recognition based on single utterance. As seen, with the powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed LSTM and CNN models for single sentence classification. However, it was still much lower than the models based on context information. It indicates that context information is crucial in the DA recognition task. BERT can boost performance in a large margin. However, it costs too much time and resources. In this reason, we chose LSTM as our utterance encoder in further experiment.
By modeling context information, the performance of the hierarchical model is improved by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, in our experiments, we removed the CRF module. In terms of different hierarchical models, our LSTM+BLSTM achieved good result. The accuracy was 80.00% which is even a little better than Hierarchical BLSTM-CRF BIBREF9. Relying on attention mechanism and local contextual modeling, our model, LSTM+Attention and LSTM+Local Contextual Attention, achieved 80.12% and 80.34% accuracy respectively. Compared with the previous best approach Hierarchical BLSTM-CRF, we can obtain a relative accuracy gain with 1.1% by our best model. It indicated that self-attention model can capture context dependency better than the BLSTM model. With adding the local constraint, we can get an even better result.
To further illustrate the effect of the context length, we also performed experiments with different sliding window $W$ and context padding $P$. Table TABREF22 shows the result. It is worth noting that it is actually the same as single sentence classification when $P = 0$ (without any context provided). First, we set $W$ to 1 to discuss how the length of context padding will affect. As seen in the result, the accuracy increased when more context padding was used for both LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With sliding window size increasing, the more context was involved together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved the best performance when window size was 1 and context padding length was 5. When window size increased, the performances of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently, which achieved the best performance when window size was 10, and the model was more stable and robust to the different setting of window size.
For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting. As seen in Table TABREF22, without subsequent utterances, the performances of these three models dropped. However, LSTM+LC Attention still outperformed the other two models.
Experiments ::: Results on DailyDialog
The classification accuracy of DailyDialog dataset is summarized in Table TABREF23. As for sentence classification without context information, the fine-tuned BERT still outperformed LSTM and CNN based models. From table TABREF18 we can see that, the average dialogue length $|U|$ in DailyDialog is much shorter than the average length of SwDA. So, in our experiment, we set the maximum of the $W$ to 10, which almost covers the whole utterances in the dialogue. Using the same way as SwDA dataset, we, first, set W to 1 and increased the length of context padding. As seen, modeling local context information, hierarchical models yielded significant improvement than sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the size of sliding window size W. From the experiments, we can see that LSTM+Attention always got a little better accuracy than LSTM+BLSTM. With window size increasing, the performances of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For the longer sliding window, the performance of LSTM+LC Attention was still better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performances of these three models dropped without subsequent utterances.
Experiments ::: Visualization
In this section, we visualize the attention weights for analyzing how local contextual attention works in detail. Figure FIGREF24 shows the visualization of original attention and local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of grids is normalized by $softmax$, $M_{ij}$ represents for the dependency score between the utterance i and utterance j. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights annotated with red color, which is learned by the original attention. The original attention model gives the utterance “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens its attention weights due to the long distance apart.
Overall, the additional Gaussian bias tends to centralize the attention distribution toward the diagonal of the matrix, which is in line with our linguistic intuition that utterances that are far apart usually do not have strong dependencies. As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances that are separated by a long relative distance. For the grids near the diagonal, it strengthens their dependency scores and, thanks to its learnable magnitude, does not introduce other useless dependencies.
Conclusions and Future Work
In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model can explicitly capture the semantic dependencies between utterances inside the dialogue. To enhance our model with local contextual information, we revise the attention distribution with a learnable Gaussian bias to make it focus on local neighbors. Based on our dialogue segmentation mechanism, we find that local contextual attention reduces noise by exploiting relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied under both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition.
There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method to multi-task training over the two stages jointly. Incorporating supervised information about dependencies between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition. | DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. |
37103369e5792ece49a71666489016c4cea94cda | 37103369e5792ece49a71666489016c4cea94cda_0 | Q: Which natural language(s) are studied?
Text: Introduction
Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, system generates a Greeting type response to former Greeting type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing DAs of a wholly online service record between customer and agent is beneficial to mine QA-pairs, which are selected and clustered then to expand the knowledge base. DA recognition is challenging due to the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from Switchboard dataset. In this example, utterance “Okay.” corresponds to two different DA labels within different semantic context.
Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentence and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding the CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network.
However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialogue. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on local and contextual semantic information through a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialogue length quantitatively, we introduce a new dialogue segmentation mechanism for the DA task and evaluate the performance for different dialogue lengths and context padding lengths under online and offline settings. Experiments and visualizations show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance on two well-known datasets.
The contributions of this paper are:
We design a hierarchical model based on self-attention and revise the attention distribution to focus on local and contextual semantic information using the relative position information between utterances.
We introduce a new dialogue segmentation mechanism for the DA task and analyze the effect of dialogue length and context padding length.
In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting.
Background ::: Related Work
DA recognition is aimed to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends to solve this problem: 1) as a sequence labeling problem, it will predict the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, it will treat utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16.
Recent studies have applied deep learning based model for DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNN and RNN to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on the top of the hierarchical network to model the label transition dependency. BIBREF10 applied the context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and got a significant improvement over state-of-the-art results on SwDA datasets. On another aspect, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoders (DI-VAE) model to learn discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation.
Background ::: Self-Attention
Self-attention BIBREF11 achieves great success for its efficient parallel computation and long-range dependency modeling.
Given the input sequence $ s = \left( s_1,...,s_n \right) $ of $n$ elements, where $ s_i \in \mathbb {R}^{d_s} $, each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \in {\mathbb {R}}^{d_s \times d_z} $, where $ h $ denotes the index of the head. For head $h$, linear projections are applied to the sequence $s$ to obtain the key (K), query (Q), and value (V) representations. The attention module obtains the weights by computing dot-products between each key/query pair, which are then normalized by $softmax$. It is defined as:

$$ z_h = \mathrm{softmax}\left(\frac{Q K^{\top }}{\sqrt{d_z}}\right) V $$
where $\sqrt{d_z}$ is the scaling factor to counteract the effect that the dot products may grow large in magnitude. The outputs of all heads are concatenated and projected:

$$ \mathrm{MultiHead}(s) = \mathrm{Concat}(z_1, \ldots , z_h)\, W^O $$
where $W^O \in \mathbb {R}^{(d_z*h)\times d_s}$ is the output projection.
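To make the scaled dot-product and multi-head computation above concrete, the following is a minimal NumPy sketch (not the authors' code); the per-head projection matrices and the output projection `Wo` are supplied by the caller, and the function name `multi_head_self_attention` is illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(s, Wq, Wk, Wv, Wo):
    """s: (n, d_s) utterance states; Wq/Wk/Wv: per-head lists of (d_s, d_z) matrices;
    Wo: (h*d_z, d_s) output projection."""
    heads = []
    for Wq_h, Wk_h, Wv_h in zip(Wq, Wk, Wv):
        Q, K, V = s @ Wq_h, s @ Wk_h, s @ Wv_h
        d_z = Q.shape[-1]
        weights = softmax(Q @ K.T / np.sqrt(d_z))   # (n, n) normalized dot-products
        heads.append(weights @ V)                    # (n, d_z) weighted sum of values
    return np.concatenate(heads, axis=-1) @ Wo       # (n, d_s)
```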
One weakness of the self-attention model is that it cannot encode position information efficiently. Several methods have been proposed to encode the relative or absolute positions of tokens in the sequence as additional input to the model. BIBREF11 used sine and cosine functions of different frequencies and added the positional encodings to the input embeddings; it relied on absolute position embeddings to capture relative positional relations through the properties of the sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encodings to explicitly model relative position with independent semantic parameters, demonstrating significant improvements even when entirely replacing conventional absolute position encodings. BIBREF12 proposed to model localness for the self-attention network with a learnable Gaussian bias, which enhanced the ability to model local relationships and demonstrated its effectiveness on the translation task.
In our study, we design a local contextual attention model which incorporates relative position information into the original attention distribution through a learnable Gaussian bias. Different from BIBREF12, in our method the distribution center is regulated around the corresponding utterance within a window, which indicates the context dependency preference, in order to capture more local contextual dependency.
Methodology
Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. Given the dataset, $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to the length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$.
Figure FIGREF6 shows our overall model structure. For the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted into dense vector representations $e_m$ from one-hot token representation. And then, we apply LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$:
where $embed$ represents the embedding layer which can be initialized by pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. LSTM helps us get the context-aware sentence representation for the input sequence. There are several approaches to represent the sentence from the words. Following BIBREF22, we add a max-pooling layer after LSTM, which selects the maximum value in each dimension from the hidden units. In our experiment, LSTM with max-pooling does perform a little better than LSTM with last-pooling, which is used in BIBREF9.
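For illustration, a minimal PyTorch sketch of this first-layer utterance encoder is given below (embedding, LSTM, then max-pooling over the word positions); the class name and hyperparameter values are illustrative and not taken from the original implementation.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):        # token_ids: (batch, M) padded word ids
        e = self.embed(token_ids)        # (batch, M, emb_dim)
        h, _ = self.lstm(e)              # (batch, M, hidden_dim) context-aware states
        u, _ = h.max(dim=1)              # max-pooling over the M word positions
        return u                         # (batch, hidden_dim) utterance vector
```

For example, `UtteranceEncoder(30000)(torch.randint(0, 30000, (4, 20)))` returns a `(4, 128)` tensor of utterance vectors.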
Afterwards, we get the utterances vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$ where $ u_j \in \mathbb {R}^{d_s}, d_s$ is the dimension of hidden units. As we discussed in section SECREF7, given the sequence $ s \in \mathbb {R}^{N_l*d_s}$, self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and get the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which represents the dependency encoder between utterances, and the current utterance encoder $s$:
Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance.
Methodology ::: Modeling Local Contextual Attention
The attention explicitly models the interaction between the utterances. However, for context modeling, the original attention mechanism always considers all of the utterances in a dialogue, which obscures the relations within the local context and is prone to overfitting during training. It is natural that utterances in the local context always have strong dependencies in our daily dialogue. Therefore, we add a learnable Gaussian bias with a local constraint to the weights normalized by $softmax$ to enhance the interaction between a concerned utterance and its neighbors.
The attention module formula is revised as:
The first term is the original dot-product self-attention model. $POS \in \mathbb {R}^{N\times N}$ is the bias matrix, where N is the length of the dialogue. The element $POS_{i,j}$ is defined following a Gaussian distribution:
$POS_{i,j}$ measures the dependency between the utterance $u_j$ and the utterance $u_i$ in terms of the relative position prior. $w_{i}$ represents the standard deviation, which controls the weight decay. Because of the local constraint, $|c_{i} - i| \le C$. For each utterance $u_i$, the predicted center position $c_{i}$ and window size $ w_{i}$ are defined as follows:
where $W_i^c,W_i^d \in \mathbb {R}^{1*N}$ are both learnable parameters. We initialize the parameter $W_i^c$ to 0, which leads to the center position $ c_i = i $ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we use the mean of the keys $\overline{K}$ in the attention mechanism to represent the context information. Moreover, the central position also indicates a dependency preference for the preceding or subsequent utterances.
It is worth noting that there is a slight difference from BIBREF12, although we both revise the attention module with a Gaussian distribution. In our method, for a given utterance $u_{i}$, the distribution center $c_{i}$ is regulated to capture not only local but also contextual dependency, which can be formally expressed as $c_{i} \in (i-C,i+C)$. In their work, however, the distribution center can be anywhere in the sequence, as it is designed for capturing the phrasal patterns that are essential for the Neural Machine Translation task.
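A NumPy sketch of how such a Gaussian bias can be added to the attention logits is shown below. Because the exact parameterizations of $c_i$ and $w_i$ are given by the paper's equations (not reproduced in this text), the sketch assumes an illustrative form in which the center offset and the window depend on the mean key $\overline{K}$ through bounded functions, and it uses the common bias form $POS_{i,j} = -\frac{(j-c_i)^2}{2 w_i^2}$ rather than the paper's exact definition. The vectors `v_c` and `v_d` are hypothetical parameters standing in for $W_i^c$ and $W_i^d$.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_contextual_attention(Q, K, V, v_c, v_d, C=3):
    """Q, K, V: (n, d) utterance-level projections; v_c, v_d: (d,) illustrative
    parameter vectors; C: local constraint so that |c_i - i| <= C."""
    n, d = Q.shape
    k_bar = K.mean(axis=0)                                  # context summary of the keys
    centers = np.arange(n) + C * np.tanh(v_c @ k_bar)       # c_i regulated around position i
    width = 1.0 + C / (1.0 + np.exp(-(v_d @ k_bar)))        # positive window size
    j = np.arange(n)[None, :]
    pos_bias = -((j - centers[:, None]) ** 2) / (2.0 * width ** 2)   # Gaussian bias matrix
    logits = Q @ K.T / np.sqrt(d) + pos_bias                 # revised attention logits
    return softmax(logits) @ V
```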
Methodology ::: Online and Offline Predictions
Previous work mainly focuses on the offline setting, where we can access all utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is a natural demand in real-time applications. In the online setting, we only care about the recognition result of the last utterance in the given context. As seen in the area with the red dashed line in Figure FIGREF6, our model is well suited to the online setting: we can calculate the attention between the last utterance and the other utterances directly, where $K \in \mathbb {R}^{1\times d}, Q \in \mathbb {R}^{n\times d}, V \in \mathbb {R}^{n\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention-based models. Table TABREF17 shows the time complexity comparison, excluding the time cost of the first-layer encoding; the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to extend to the online setting; however, for a fair comparison with previous work, we apply the models under the offline setting by default in our experiments.
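Under the online setting, only the last utterance needs a context vector, so a single attention row suffices; a hedged NumPy sketch (the function name is illustrative) is:

```python
import numpy as np

def online_context(s):
    """s: (n, d) encoded utterances; returns the attention-weighted context for
    the last utterance only, an O(n*d) operation instead of O(n^2*d)."""
    d = s.shape[-1]
    scores = s[-1:] @ s.T / np.sqrt(d)            # (1, n) similarities to all utterances
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ s                             # (1, d) context for the last utterance
```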
Methodology ::: Separate into Sub-dialogues
The length of different dialogues in the dataset varies a lot, and it is worth noting that the dialogue length affects the model prediction. On the one hand, under the offline setting, we can access all utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances, the more efficient the prediction. On the other hand, if we put too many utterances into one prediction, the model captures too much unrelated dependency in the long utterance sequence, for both the LSTM-based and attention-based models. Sub-dialogues with the same length also enable efficient batch training. To study how the dialogue length and context padding length affect the performance, we define a sliding window $W$, which is the sub-dialogue length, and separate each long dialogue into several short sub-dialogues. For example, if the dialogue $D$ is a sequence of utterances with length $n$, we obtain $\lceil n/W \rceil $ sub-dialogues; for the k-th sub-dialogue, the utterance sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. To avoid losing context information through the separation, which would affect the context modeling for the utterances at the beginning and end of each sub-dialogue, we add the corresponding context of $P$ (context padding) utterances at the beginning and end of each sliding window, so for the k-th sub-dialogue, the revised utterance sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context padding utterances, which can be formally expressed as:
where $M(i)=0$ if utterance $i$ is in the context padding and 1 otherwise, and $L$ is the cross-entropy loss.
$W$ and $P$ are both hyperparameters; in experiment SECREF21, we discuss the effect of the window size and the context padding length.
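The segmentation and loss masking described above can be sketched in plain Python as follows; the function and variable names are illustrative.

```python
import math

def split_dialogue(utterances, labels, W=10, P=5):
    """Split one dialogue into ceil(n/W) sub-dialogues of window size W, each
    extended with up to P context-padding utterances on both sides; padded
    positions receive loss_mask = 0 so they do not contribute to the loss."""
    n = len(utterances)
    sub_dialogues = []
    for k in range(math.ceil(n / W)):
        lo, hi = k * W, min((k + 1) * W, n)              # core window [lo, hi)
        ctx_lo, ctx_hi = max(0, lo - P), min(n, hi + P)  # window with context padding
        loss_mask = [1 if lo <= i < hi else 0 for i in range(ctx_lo, ctx_hi)]
        sub_dialogues.append((utterances[ctx_lo:ctx_hi],
                              labels[ctx_lo:ctx_hi],
                              loss_mask))
    return sub_dialogues
```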
Experiments ::: Datasets
We evaluate the performance of our model on two high-quality datasets: the Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work for the DA recognition task. It is annotated on 1155 human-to-human telephonic conversations about given topics. Each utterance in the conversation is manually labeled as one of 42 dialogue acts according to the SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, 43 categories of dialogue acts were used, which differs from our work and previous work. The difference in the number of labels is mainly due to the special label “+”, which represents that the utterance is interrupted by the other speaker (and thus split into two or more parts). We used the same processing as BIBREF26, which concatenates the parts of an interrupted utterance together, giving the result the tag of the first part and putting it in its place in the conversation sequence. This is critical for a fair comparison because nearly 8% of the data has the label “+”. Lacking standard splits, we followed the training/validation/test splits of BIBREF14. The DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect our daily communication style and cover various topics about our daily life. Each utterance in the conversation is manually labeled as one out of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before being tokenized, and sentences were then tokenized by the WordPiece tokenizer BIBREF27 with a 30,000-token vocabulary to alleviate the out-of-vocabulary problem.
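A hedged sketch of the SwDA preprocessing step that merges “+”-tagged continuations back into the same speaker's earlier part (keeping the tag of the first part) is given below; the exact rules of BIBREF26 may differ in details, and the function name is illustrative.

```python
def merge_interrupted(utterances, tags, speakers):
    """utterances/tags/speakers: parallel lists; continuations tagged '+' are
    appended to the most recent utterance of the same speaker, which keeps
    the tag of its first part."""
    merged = []                       # list of [speaker, text, tag]
    for utt, tag, spk in zip(utterances, tags, speakers):
        if tag == "+" and any(item[0] == spk for item in merged):
            for item in reversed(merged):
                if item[0] == spk:
                    item[1] = item[1] + " " + utt   # concatenate the interrupted parts
                    break
        else:
            merged.append([spk, utt, tag])
    return merged
```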
[1] The authors claimed that they achieved 78.7% (81.3%) accuracy with pre-trained word embeddings (fine-grained embeddings). For a fair comparison, both previous work and ours are simply based on pre-trained word embeddings. [2] The authors randomly selected two test sets, which differ from those of previous work and ours, and achieved 77.15% and 79.74%; we re-implemented their method on the standard test set.
Experiments ::: Results on SwDA
In this section, we evaluate the proposed approaches on the SwDA dataset. Table TABREF20 shows our experimental results and the previous ones on SwDA. It is worth noting that BIBREF10 combined GloVe BIBREF28 and pre-trained ELMo representations BIBREF29 as word embeddings; in our work, we only apply pre-trained word embeddings. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. The baseline models, CNN and LSTM, got similar accuracy (75.27% and 75.59%, respectively). We also fine-tuned BERT BIBREF30 to perform recognition based on a single utterance. As seen, with its powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed the LSTM and CNN models for single-sentence classification. However, it was still much lower than the models based on context information, which indicates that context information is crucial for the DA recognition task. BERT can boost performance by a large margin, but it costs too much time and resources. For this reason, we chose LSTM as our utterance encoder in further experiments.
By modeling context information, the performance of the hierarchical models improves by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, we removed the CRF module in our experiments. Among the different hierarchical models, our LSTM+BLSTM achieved a good result: its accuracy was 80.00%, which is even slightly better than Hierarchical BLSTM-CRF BIBREF9. Relying on the attention mechanism and local contextual modeling, our models, LSTM+Attention and LSTM+Local Contextual Attention, achieved 80.12% and 80.34% accuracy, respectively. Compared with the previous best approach, Hierarchical BLSTM-CRF, we obtain a relative accuracy gain of 1.1% with our best model. This indicates that the self-attention model can capture context dependency better than the BLSTM model. With the added local constraint, we get an even better result.
To further illustrate the effect of the context length, we also performed experiments with different sliding windows $W$ and context paddings $P$. Table TABREF22 shows the results. It is worth noting that the setting with $P = 0$ (without any context provided) is actually the same as single-sentence classification. First, we set $W$ to 1 to examine how the length of context padding affects performance. As seen in the results, the accuracy increased when more context padding was used for both the LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate the performance of LSTM+LC Attention when the context padding is small. There was no further accuracy improvement when the length of context padding was beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it works. With increasing sliding window size, more context is involved, together with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved their best performance when the window size was 1 and the context padding length was 5. When the window size increased, the performance of these two models dropped. However, our model (LSTM+LC Attention) can leverage the context information more efficiently: it achieved the best performance when the window size was 10, and it was more stable and robust to different settings of the window size.
For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting. As seen in Table TABREF22, without subsequent utterances, the performances of these three models dropped. However, LSTM+LC Attention still outperformed the other two models.
Experiments ::: Result on DailyDialog
The classification accuracy on the DailyDialog dataset is summarized in Table TABREF23. For sentence classification without context information, the fine-tuned BERT still outperformed the LSTM- and CNN-based models. From Table TABREF18 we can see that the average dialogue length $|U|$ in DailyDialog is much shorter than that of SwDA. Therefore, in our experiment, we set the maximum $W$ to 10, which almost covers all utterances in a dialogue. Following the same procedure as for the SwDA dataset, we first set W to 1 and increased the length of context padding. As seen, by modeling local context information, the hierarchical models yielded significant improvements over sentence classification. There was no further accuracy improvement when the length of context padding was beyond 2, so we fixed the context padding length P to 2 and increased the sliding window size W. From the experiments, we can see that LSTM+Attention always got slightly better accuracy than LSTM+BLSTM. With increasing window size, the performance of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For longer sliding windows, the performance of LSTM+LC Attention was still better and more robust than that of the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performance of all three models dropped without subsequent utterances.
Experiments ::: Visualization
In this section, we visualize the attention weights to analyze how local contextual attention works in detail. Figure FIGREF24 shows the visualization of the original attention and the local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of the grid is normalized by $softmax$, and $M_{ij}$ represents the dependency score between utterance i and utterance j. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights, annotated in red, which are learned by the original attention. The original attention model gives the utterances “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens these attention weights because of the long relative distance.
Overall, the additional Gaussian bias tends to centralize the attention distribution along the diagonal of the matrix, which is in line with our linguistic intuition that utterances far apart usually do not have strong dependencies. As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances that are a long relative distance apart. For the grids near the diagonal, it strengthens the dependency scores and, thanks to its learnable magnitude, does not introduce other useless dependencies.
Conclusions and Future Work
In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model can explicitly capture the semantic dependencies between utterances inside the dialogue. To enhance our model with local contextual information, we revise the attention distribution with a learnable Gaussian bias to make it focus on the local neighbors. Based on our dialogue segmentation mechanism, we find that local contextual attention reduces noise through relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied under both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition.
There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method to multi-task training with the two stages jointly. Incorporating supervised information about the dependency between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition. | Unanswerable |
479d334b79c1eae3f2aa3701d28aa0d8dd46036a | 479d334b79c1eae3f2aa3701d28aa0d8dd46036a_0 | Q: Does the performance necessarily drop when more control is desired?
Text: Introduction
Many text generation tasks, e.g., data-to-text, summarization and image captioning, can be naturally divided into two steps: content selection and surface realization. The generations are supposed to have two levels of diversity: (1) content-level diversity reflecting multiple possibilities of content selection (what to say) and (2) surface-level diversity reflecting the linguistic variations of verbalizing the selected contents (how to say) BIBREF0 , BIBREF1 . Recent neural network models handle these tasks with the encoder-decoder (Enc-Dec) framework BIBREF2 , BIBREF3 , which simultaneously performs selecting and verbalizing in a black-box way. Therefore, both levels of diversity are entangled within the generation. This entanglement, however, sacrifices the controllability and interpretability, making it difficult to specify the content to be conveyed in the generated text BIBREF4 , BIBREF5 .
With this in mind, this paper proposes decoupling content selection from the Enc-Dec framework to allow finer-grained control over the generation. Table TABREF2 shows an example. We can easily modify the content selection to generate text with various focuses, or sample multiple paraphrases by fixing the content selection.
Though there has been much work dealing with content selection for the Enc-Dec, none of them is able to address the above concerns properly. Current methods can be categorized into the following three classes and have different limits:
In this paper, we treat the content selection as latent variables and train with amortized variational inference BIBREF10 , BIBREF11 . This provides a lower training variance than Reinforce-select. The selector and generator are co-trained within the same objective, the generations are thus more faithful to the selected contents than Bottom-up methods. Our model is task-agnostic, end-to-end trainable and can be seamlessly inserted into any encoder-decoder architecture. On both the data-to-text and headline generation task, we show our model outperforms others regarding content-level diversity and controllability while maintaining comparable performance. The performance/controllability trade-off can be effectively adjusted by adjusting a single hyperparameter in the training stage, which constrains an upper bound of the conditional mutual information (CMI) between the selector and generated text BIBREF12 , BIBREF13 . A higher CMI leads to stronger controllability with a bit more risk of text disfluency.
In summary, our contributions are (1) systematically studying the problem of controllable content selection for Enc-Dec text generation, (2) proposing a task-agnostic training framework achieving promising results and (3) introducing an effective way to achieve the trade-off between performance and controllability.
Background and Notation
Let INLINEFORM0 denote a source-target pair. INLINEFORM1 is a sequence of INLINEFORM2 and can be either some structured data or unstructured text/image depending on the task. INLINEFORM3 corresponds to INLINEFORM4 which is a text description of INLINEFORM5 . The goal of text generation is to learn a distribution INLINEFORM6 to automatically generate proper text.
The Enc-Dec architecture handles this task with an encode-attend-decode process BIBREF3 , BIBREF14 . The encoder first encodes each INLINEFORM0 into a vector INLINEFORM1 . At each time step, the decoder pays attention to some source embeddings and outputs the probability of the next token by INLINEFORM2 . INLINEFORM3 is a weighted average of source embeddings: DISPLAYFORM0
INLINEFORM0 is the hidden state of the decoder at time step INLINEFORM1 . INLINEFORM2 is a score function to compute the similarity between INLINEFORM3 and INLINEFORM4 BIBREF15 .
Content Selection
Our goal is to decouple the content selection from the decoder by introducing an extra content selector. We hope the content-level diversity can be fully captured by the content selector for a more interpretable and controllable generation process. Following BIBREF6 , BIBREF16 , we define content selection as a sequence labeling task. Let INLINEFORM0 denote a sequence of binary selection masks. INLINEFORM1 if INLINEFORM2 is selected and 0 otherwise. INLINEFORM3 is assumed to be independent of the others and is sampled from a Bernoulli distribution INLINEFORM4 . INLINEFORM6 is the Bernoulli parameter, which we estimate using a two-layer feedforward network on top of the source encoder. Text is generated by first sampling INLINEFORM7 from INLINEFORM8 to decide which content to cover, then decoding with the conditional distribution INLINEFORM9 . The text is expected to faithfully convey all selected contents and drop unselected ones. Fig. FIGREF8 depicts this generation process. Note that the selection is based on the token-level context-aware embeddings INLINEFORM10 and will maintain information from the surrounding contexts. It encourages the decoder to stay faithful to the original information instead of simply fabricating random sentences by connecting the selected tokens.
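A minimal PyTorch sketch of such a selector is shown below: a two-layer feed-forward network on top of the token-level encoder states that outputs Bernoulli parameters, from which a selection mask is sampled. The class name and dimensions are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class ContentSelector(nn.Module):
    def __init__(self, d_enc, d_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_enc, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1), nn.Sigmoid())

    def forward(self, h):                    # h: (batch, src_len, d_enc) encoder states
        probs = self.net(h).squeeze(-1)      # (batch, src_len) Bernoulli parameters
        mask = torch.bernoulli(probs)        # sampled binary selection mask
        return probs, mask
```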
For each source-target pair, the ground-truth selection mask is unknown, so training is challenging. In the following session, we discuss several training possibilities and introduce the proposed model in detail.
Bottom-up
The most intuitive way is training the content selector to target some heuristically extracted contents. For example, we can train the selector to select overlapped words between the source and target BIBREF6 , sentences with higher tf-idf scores BIBREF20 or identified image objects that appear in the caption BIBREF21 . A standard encoder-decoder model is independently trained. In the testing stage, the prediction of the content selector is used to hard-mask the attention vector to guide the text generation in a bottom-up way. Though easy to train, Bottom-up generation has the following two problems: (1) The heuristically extracted contents might be coarse and cannot reflect the variety of human languages and (2) The selector and decoder are independently trained towards different objectives thus might not adapt to each other well.
INLINEFORM0 as Latent Variable: Another way is to treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood. By doing so, the selector has the potential to automatically explore optimal selecting strategies best fit for the corresponding generator component.
With this in mind. We design INLINEFORM0 by changing the original decoder in the following way: (1) We initialize hidden states of the decoder from a mean pooling over selected contents to inform the decoder which contents to cover and (2) Unselected contents will be prohibited from being attended to: DISPLAYFORM0
INLINEFORM0 is the initial decoder hidden state and MLP denotes multi-layer-perceptron.
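A hedged PyTorch sketch of these two modifications follows: the decoder's initial state is a mean pooling over the selected source states (the MLP mentioned above is omitted for brevity), and unselected tokens are masked out of the attention. The function name is illustrative, and the sketch assumes at least one token is selected.

```python
import torch

def init_state_and_masked_attention(h_src, mask, attn_scores):
    """h_src: (src_len, d) encoder states; mask: (src_len,) binary selection;
    attn_scores: (src_len,) unnormalized attention scores for one decoding step."""
    denom = mask.sum().clamp(min=1.0)
    init_state = (h_src * mask.unsqueeze(-1)).sum(dim=0) / denom   # mean over selected tokens
    masked = attn_scores.masked_fill(mask == 0, float("-inf"))     # forbid unselected tokens
    attn = torch.softmax(masked, dim=-1)
    return init_state, attn
```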
Since computing the exact marginal likelihood INLINEFORM0 requires enumerating over all possible combinations of INLINEFORM1 (complexity INLINEFORM2 ), we need some way to efficiently estimate the likelihood.
Soft-Select
Soft-select falls back on a deterministic network to output the likelihood function's first-order Taylor series approximation expanded at INLINEFORM0 : INLINEFORM1
By moving the expectation into the decoding function, we can deterministically compute the likelihood by setting INLINEFORM0 , reducing complexity to INLINEFORM1 . Each attention weight will first be “soft-masked" by INLINEFORM2 before being passed to the decoder. soft-select is fully differentiable and can be easily trained by gradient descent. However, this soft-approximation is normally inaccurate, especially when INLINEFORM3 has a high entropy, which is common in one-to-many text generation tasks. The gap between INLINEFORM4 and INLINEFORM5 will be large BIBREF22 , BIBREF23 . In practice, this would lead to unrealistic generations when sampling INLINEFORM6 from the deterministically trained distribution.
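A short sketch of the soft-select approximation: the attention weights are damped element-wise by the Bernoulli means instead of a sampled hard mask (whether to renormalize afterwards is an assumption, not stated in the text; the function name is illustrative).

```python
import torch

def soft_select_attention(attn_scores, probs, renormalize=True):
    """attn_scores, probs: (src_len,) tensors; probs are the Bernoulli means."""
    attn = torch.softmax(attn_scores, dim=-1) * probs      # "soft mask" each attention weight
    if renormalize:
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    return attn
```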
Reinforce-Select
Reinforce-select (RS) BIBREF24 , BIBREF9 utilizes reinforcement learning to approximate the marginal likelihood. Specifically, it is trained to maximize a lower bound of the likelihood by applying the Jensen inequalily: DISPLAYFORM0
The gradient to INLINEFORM0 is approximated with Monte-Carlo sampling by applying the REINFORCE algorithm BIBREF25 , BIBREF26 . To speed up convergence, we pre-train the selector with some distant supervision, which is a common practice in reinforcement learning. REINFORCE is unbiased but has a high variance. Much research has proposed sophisticated techniques for variance reduction BIBREF11 , BIBREF27 , BIBREF28 . In text generation, the high-variance problem is aggravated because there exist multiple valid selections, so accurately estimating the likelihood becomes difficult. Another issue is its tendency to avoid stochasticity BIBREF29 , which we show in Sec SECREF27 results in low content-level diversity.
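A NumPy sketch of the single-sample REINFORCE estimate for the selector gradient is given below; `log_lik_fn` stands in for the decoder log-likelihood and `baseline` for a control variate, both of which are illustrative placeholders rather than the paper's exact choices.

```python
import numpy as np

def reinforce_selector_grad(logits, log_lik_fn, baseline=0.0):
    """Estimate d/d(logits) of E_{z ~ Bernoulli(sigmoid(logits))}[log p(y|x,z)]."""
    p = 1.0 / (1.0 + np.exp(-logits))                    # selection probabilities
    z = (np.random.rand(*p.shape) < p).astype(float)     # one sampled selection mask
    reward = log_lik_fn(z) - baseline                    # centred score
    return reward * (z - p)          # (z - p) is the gradient of the log Bernoulli pmf w.r.t. logits
```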
Variational Reinforce-Select
We propose Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction. Instead of directly integrating over INLINEFORM0 , it imposes a proposal distribution INLINEFORM1 for importance sampling. The marginal likelihood is lower bounded by: DISPLAYFORM0
By choosing a proper INLINEFORM0 , the bound will be improved and the variance can be largely reduced compared with REINFORCE. If INLINEFORM1 equals the posterior distribution INLINEFORM2 , the bound is tight and the variance would be zero BIBREF30 . We define INLINEFORM3 as a mean-field distribution parameterized by a set of global parameters INLINEFORM4 to approach the true posterior distribution. INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are simultaneously trained by minimizing the last line of Eq. EQREF15 . INLINEFORM8 also allows us to further perform posterior inference: given an arbitrary text INLINEFORM9 for a source INLINEFORM10 , we can infer which source contents are included in INLINEFORM11 (an example is given in Appendix SECREF9 ).
In Eq. EQREF15 , the KL divergence term can be computed analytically; thanks to the independence assumption, it can be summed over each individual INLINEFORM0 . The likelihood term is differentiable with respect to INLINEFORM1 but not to INLINEFORM2 , so we estimate the gradient with respect to INLINEFORM3 in Eq. EQREF15 by applying the REINFORCE estimator: DISPLAYFORM0
INLINEFORM0 is the control variate BIBREF25 . The optimal INLINEFORM1 would be BIBREF31 : DISPLAYFORM0
which we set as a soft-select approximation: DISPLAYFORM0
We estimate Eq. EQREF16 with a single sample from INLINEFORM0 for efficiency. Though multiple samples could potentially tighten the bound further and reduce the variance BIBREF32 , BIBREF33 , BIBREF34 , they bring significant computational overhead, especially in text generation tasks where the whole sentence needs to be decoded.
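Putting the pieces together, the following hedged NumPy sketch illustrates the single-sample bound estimate: a sampled likelihood term minus the closed-form Bernoulli KL between q and p, with the soft-select value used as the control variate. `log_lik_fn` is an illustrative placeholder for the decoder, and the exact centring used in Eq. EQREF16 is an assumption here.

```python
import numpy as np

def bernoulli_kl(q, p, eps=1e-8):
    """KL( Bernoulli(q) || Bernoulli(p) ), summed over independent positions."""
    return np.sum(q * np.log((q + eps) / (p + eps))
                  + (1 - q) * np.log((1 - q + eps) / (1 - p + eps)))

def vrs_bound_sample(q_probs, p_probs, log_lik_fn):
    z = (np.random.rand(*q_probs.shape) < q_probs).astype(float)   # z sampled from q(z|x,y)
    log_lik = log_lik_fn(z)
    bound = log_lik - bernoulli_kl(q_probs, p_probs)               # one-sample lower bound
    baseline = log_lik_fn(q_probs)          # soft-select approximation as control variate
    reward = log_lik - baseline             # multiplies grad log q(z|x,y) in the REINFORCE term
    return bound, reward
```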
Degree of Controllability
In practice, when treating content selection as latent variables, the model tends to end up with a trivial solution of always selecting all source tokens BIBREF35 , BIBREF36 . This behavior is understandable since Eq. EQREF10 strictly masks unselected tokens. Wrongly unselecting one token will largely deteriorate the likelihood. Under the maximum likelihood (MLE) objective, this high risk pushes the selector to take the conservative strategy of always keeping all tokens; the whole model then degenerates to the standard Enc-Dec, and the selection mask loses its effect on the generation. Usually people apply a penalty term to the selecting ratio when optimizing the likelihood: DISPLAYFORM0
INLINEFORM0 is the MLE loss function, INLINEFORM1 is the mean of INLINEFORM2 and INLINEFORM3 is the target selecting ratio. This forces the selector to select the most important INLINEFORM4 tokens for each source input instead of keeping all of them.
In our VRS model, we can easily adjust the degree of controllability by limiting an upper bound of the conditional mutual information (CMI) INLINEFORM0 BIBREF13 . Specifically, we can change our objective into: DISPLAYFORM0
INLINEFORM0 is a fixed Lagrangian multiplier. Eq. EQREF21 can be proved to be equal to maximum likelihood with the constraint INLINEFORM1 , given a proper INLINEFORM2 BIBREF12 . A higher INLINEFORM3 indicates that INLINEFORM4 has more influence on INLINEFORM5 (higher controllability), while always safely selecting all tokens leads to INLINEFORM6 . It is preferred over Eq. EQREF20 because (a) CMI directly considers the dependency between the selection and the multiple possible texts, while limiting the ratio aims at finding the single most salient parts of each source, and (b) unlike CMI, limiting the ratio is coarse: it considers only the total selected size and ignores its internal distribution.
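Since Eq. EQREF21 itself is not reproduced in this text, the following NumPy sketch only illustrates the general shape of such an objective: a reconstruction term plus a KL term scaled by a fixed multiplier. The exact form, sign conventions and direction of the trade-off in the paper's equation are assumptions here, and `multiplier` is a hypothetical stand-in for the Lagrangian term.

```python
import numpy as np

def weighted_objective(neg_log_lik, q_probs, p_probs, multiplier=0.1, eps=1e-8):
    """neg_log_lik: scalar -log p(y|x,z); q_probs/p_probs: Bernoulli parameters of
    q(z|x,y) and p(z|x); the KL term is what bounds the selector/text dependency."""
    kl = np.sum(q_probs * np.log((q_probs + eps) / (p_probs + eps))
                + (1 - q_probs) * np.log((1 - q_probs + eps) / (1 - p_probs + eps)))
    return neg_log_lik + multiplier * kl    # loss to minimize under this reading
```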
[tb] Variational Reinforce-Select (VRS)
  Parameters: INLINEFORM0
  INLINEFORM1 ← TRUE
  Repeat:
    Sample X, Y from the corpus; encode X into INLINEFORM2 ;
    If INLINEFORM0 : update INLINEFORM1 with distant supervision;
    Else: update INLINEFORM0 by INLINEFORM1 (Eq. EQREF15 ); update INLINEFORM2 by INLINEFORM3 (Eq. EQREF21 ); set INLINEFORM4 to FALSE if Eq. EQREF15 degrades
  Until convergence and INLINEFORM5 is FALSE
In practice, we can set INLINEFORM0 to adjust the degree of controllability we want. Later we will show that this leads to a trade-off with performance. The final algorithm is detailed in Algorithm SECREF19 . To keep the comparison fair, we train RS and VRS with the same control variate and pre-training strategy.
Related Work
Most content selection models train the selector with heuristic rules BIBREF39 , BIBREF20 , BIBREF16 , BIBREF6 , BIBREF40 , BIBREF41 , which fail to fully capture the relation between selection and generation. BIBREF7 , BIBREF8 , BIBREF42 , BIBREF20 “soft-select" word or sentence embeddings based on a gating function. The output score from the gate is a deterministic vector without any probabilistic variations, so controlling the selection to generate diverse text is impossible. Very few works explicitly define a bernoulli distribution for the selector, then train with the REINFORCE algorithm BIBREF24 , BIBREF9 , but the selection targets at a high recall regardless of the low precision, so the controllability over generated text is weak. BIBREF43 control the generation by manually concatenating entity embeddings, while our model is much more flexible by explicitly defining the selection probability over all source tokens. Our work is closely related with learning discrete representations with variational inference BIBREF44 , BIBREF45 , BIBREF46 , BIBREF33 , where we treat content selection as the latent representation. Limiting the KL-term is a common technique to deal with the “posterior collapse" problem BIBREF47 , BIBREF48 , BIBREF49 . We adopt a similar approach and use it to further control the selecting strategy.
Experiments
For the experiments, we focus on comparing (1) Bottom-up generation (Bo.Up.), (2) soft-select (SS), (3) Reinforce-select (RS) and (4) Variational-Reinforce-select (VRS) regarding their performance on content selection. SS and RS are trained with the selecting ratio constraint in Eq. EQREF20 . For the SS model, we further add a regularization term to encourage the maximum value of INLINEFORM0 to be close to 1 as in BIBREF7 . We first briefly introduce the tasks and important setup, then present the evaluation results.
Tasks and Setup
We test content-selection models on the headline and data-to-text generation task. Both tasks share the same framework with the only difference of source-side encoders.
Headline Generation: We use English Gigaword preprocessed by BIBREF50 , which pairs first sentences of news articles with their headlines. We keep most settings same as in BIBREF8 , but use a vocabulary built by byte-pair-encoding BIBREF51 . We find it speeds up training with superior performance.
Data-to-Text Generation: We use the Wikibio dataset BIBREF52 . The source is a Wikipedia infobox and the target is a one-sentence biography description. Most settings are the same as in BIBREF53 , but we use a bi-LSTM encoder for better performance.
Heuristically extracted content: This is used to train the selector for bottom up models and pre-train the RS and VRS model. For wikibio, we simply extract overlapped words between the source and target. In Gigaword, as the headline is more abstractive, we select the closest source word for each target word in the embedding space. Stop words and punctuations are prohibited from being selected.
Choice of INLINEFORM0 : As seen in Sec SECREF19 , we need to set the hyperparameter INLINEFORM1 for RS/SS and INLINEFORM2 for VRS. INLINEFORM3 corresponds to the selecting ratio. We set them as INLINEFORM4 for Wikibio and INLINEFORM5 for Gigaword. The value is decided by running a human evaluation to get the empirical estimation. To keep comparison fairness, we tune INLINEFORM6 to make VRS select similar amount of tokens with RS. The values we get are INLINEFORM7 for Wikibio and INLINEFORM8 for Gigaword. INLINEFORM9 is the number of source tokens.
Results and Analysis
Ideally we would expect the learned content selector to (1) have reasonable diversity so that text with various contents can be easily sampled, (2) properly control the contents described in the generated text and (3) not hurt performance. The following section will evaluate these three points in order.
Diversity: We first look into the diversity of content selection learned by different models. For each test datum, 50 selection masks are randomly sampled from the model's learned distribution, and greedy decoding is run to generate the text for each mask. We measure the entropy of the selector, the proportion of unique selection masks, and the proportion of unique generated texts among the 50 samples. We further define the “effect” of the selector as the ratio of unique sampled texts to unique sampled masks, which indicates how often changing the selection mask also leads to a change in the generated text. The results are averaged over all test data. Following BIBREF50 and BIBREF52 , we measure the quality of the generated text with ROUGE-1, 2, L F-scores for Gigaword and ROUGE-4, BLEU-4, NIST for Wikibio. As there is only one reference text for each source, we report an oracle upper bound of these scores by assuming an “oracle” that can choose the best text among all the candidates BIBREF54 , BIBREF21 . Namely, out of each 50 sampled texts, we pick the one with the maximum metric score. The final metric score is evaluated on these “oracle”-picked samples. The intuition is that if the content selector is properly trained, at least one out of the 50 samples should describe contents similar to the reference text, and the metric score between them should be high. Table TABREF25 lists the results. We can make the following observations:
The RS model completely fails to capture the content-level diversity. Its selector is largely deterministic, with the lowest entropy value among all models. In contrast, the selectors from SS, VRS and Bo.Up. have reasonable diversity, with over INLINEFORM0 and INLINEFORM1 unique selection masks for Gigaword and Wikibio respectively.
The selector from VRS has the strongest effect to the generator, especially on the Gigaword data where modifying the content selection changes the corresponding text in more than 95% of the cases. RS has the lowest effect value, which indicates that even with the selecting ratio constraint, its generator still ignores the selection mask to a large extent.
The oracle metric score of VRS is much higher than the other two. This is beneficial when people want to apply the model to generate a few candidate text then hand-pick the suitable one. VRS has more potential than the other three to contain the expected text. SS performs worst. The gap between the soft approximation and the real distribution, as mentioned before, indeed results in a large drop of performance.
In short, compared with others, the content selector of VRS is (1) diverse, (2) has stronger effect on the text generation and (3) with a larger potential of producing an expected text.
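For reference, the diversity statistics described above (unique-mask ratio, unique-text ratio and the selector “effect”) can be computed with a few lines of Python; the function name and input format are illustrative.

```python
def diversity_stats(masks, texts):
    """masks: the 50 sampled selection masks (hashable, e.g. tuples of 0/1);
    texts: the corresponding greedily decoded texts."""
    n = len(masks)
    unique_masks = len(set(masks))
    unique_texts = len(set(texts))
    effect = unique_texts / max(unique_masks, 1)   # how often a new mask changes the text
    return unique_masks / n, unique_texts / n, effect
```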
Controllability: We have shown the content selector of VRS is diverse and has strong effect on the text generation. This section aims at examining whether such effect is desirable, i.e., whether the selector is able to properly control the contents described in the text. We measure it based on the self-bleu metric and a human evaluation.
The self-BLEU metric measures the controllability by evaluating the “intra-selection” similarity of the generated text. Intuitively, by fixing the selection mask, multiple texts sampled from the decoder are expected to describe the same contents and should thereby be highly similar to each other. The decoder should only model surface-level diversity without further modifying the selected contents. With this in mind, for each test datum, we randomly sample a selection mask from the selector's distribution, then fix the mask and run the decoder to sample 10 different texts. The self-BLEU-1 score BIBREF55 on the sampled texts is reported, which is the average BLEU score between each text pair. A higher self-BLEU score indicates that the sampled texts are more similar to each other. The results are shown in Table TABREF31 . We can see that generations from VRS have a clearly higher intra-selection similarity. SS performs even worse than RS, despite having a high effect score in Table TABREF25 . The selector from SS affects the generation in an undesirable way, which also explains why SS has the lowest oracle metric score though with a high score on content diversity and effect.
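A sketch of the self-BLEU-1 computation over the samples decoded from one fixed mask is shown below; the NLTK implementation and whitespace tokenization are assumptions, as the text does not specify a toolkit.

```python
from itertools import combinations
from nltk.translate.bleu_score import sentence_bleu

def self_bleu_1(texts):
    """Average pairwise BLEU-1 among the sampled texts (higher = more similar)."""
    tokenized = [t.split() for t in texts]
    scores = []
    for i, j in combinations(range(len(tokenized)), 2):
        scores.append(sentence_bleu([tokenized[i]], tokenized[j], weights=(1.0, 0, 0, 0)))
    return sum(scores) / max(len(scores), 1)
```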
We further run a human evaluation to measure the text-content consistency among different models. 100 source texts are randomly sampled from the human-written DUC 2004 data for task 1&2 BIBREF56 . Bo.Up, SS, RS and VRS are applied to generate the target text by first sampling a selection mask, then running beam search decoding with beam size 10. We are interested in seeing (1) if multiple generations from the same selection mask are paraphrases of each other (intra-consistent) and (2) if generations from different selection masks do differ in the content they describe (inter-diverse). The results in Table TABREF32 show that VRS significantly outperforms the other two in both intra-consistency and inter-diversity. RS has the lowest score on both because its selector has very weak effects on the generation, as measured in the last section. Bo.Up and SS lie between them. Overall, VRS is able to maintain the highest content-text consistency among them.
Performance INLINEFORM0 Trade-off: To see if the selector affects performance, we also ask human annotators to judge the text fluency. The fluency score is computed as the average number of text being judged as fluent. We include generations from the standard Enc-Dec model. Table TABREF32 shows the best fluency is achieved for Enc-Dec. Imposing a content selector always affects the fluency a bit. The main reason is that when the controllability is strong, the change of selection will directly affect the text realization so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. For SS and Bo.Up, the drop of fluency is significant because of the gap of soft approximation and the independent training procedure. In general, VRS does properly decouple content selection from the enc-dec architecture, with only tiny degrade on the fluency.
Table TABREF33 / TABREF34 further measure the metric scores on Gigaword/Wikibio by decoding text from the best selection mask based on the selector's distribution (set INLINEFORM0 if INLINEFORM1 and 0 otherwise). We include results from VRS model with INLINEFORM2 , which puts no constraint on the mutual information. We further report the score by generating the best selection mask from the learned posterior distribution INLINEFORM3 for VRS model. Two current SOTA results from BIBREF8 and BIBREF53 and the proportion of selected source words for each model are also included. We have the following observations:
As the value of INLINEFORM0 decreases, the performance of VRS improves, but the selector loses more controllability because the model tends to over-select contents (over INLINEFORM1 source words selected). The text-content consistency will become low.
Increasing INLINEFORM0 sacrifices a bit of performance, but the results are still comparable with SOTA, especially on Wikibio, where the performance drop is minor. The reason should be that the selection is relatively easier to predict on Wikibio, while Gigaword has more uncertainty.
Increasing INLINEFORM0 improves the accuracy of the posterior selection. This would be useful when we want to perform posterior inference for some source-target pair.
Setting INLINEFORM0 can actually outperform SOTA seq2seq which keeps all tokens, suggesting it is still beneficial to use the VRS model even if we do not care about the controllability.
Figure FIGREF39 visualizes how changing the value of INLINEFORM0 affects the negative log likelihood (NLL), entropy of the selector and self-bleu score, which roughly correlates with performance, diversity and controllability. NLL is evaluated based on the lower bound in Eq EQREF15 BIBREF57 . We can see as INLINEFORM1 increases, the performance decreases gradually but the content selection gains more diversity and controllability. In practice we can tune the INLINEFORM2 value to achieve a trade-off.
Generation Example: Figure FIGREF40 shows some examples from Gigaword. As can be seen, decodings from the VRS model are largely consistent with each other, in most cases only replacing one or two words with corresponding synonyms. Samples are able to faithfully convey all selected contents. In contrast, generations from SS, Bo.Up. and RS are unpredictable, differing in both the selected contents and the way of saying them. SS and Bo.Up also suffer more from text disfluency. The generations from them are largely uncertain.
Conclusion
In this paper, we tackle the unaddressed problem of controllable content selection in text generation. We propose a general framework based on variational inference that can potentially be applied to arbitrary tasks. On both the headline generation and data-to-text tasks, our model outperforms state-of-the-art models regarding the diversity and controllability of content selection. We further introduce an effective way to achieve a performance/controllability trade-off, which can be easily tuned to meet specific requirements.
Acknowledgments
We thank anonymous reviewers for valuable comments, thank Aditya Mogadala, Shun Kiyono, Thomas Mclachlan and other members of the LIAT team at RIKEN AIP for useful discussions. Xiaoyu Shen is supported by IMPRS-CS fellowship. The work of J. Suzuki was partly supported by JSPS KAKENHI Grant Number JP19104418 and AIRPF Grant Number 30AI036-8. This work is also partially funded by DFG collaborative research center SFB 1102.
Performance/Controllability trade-off
The trade-off between performance and interpretability has been a long-standing problem in feature selection BIBREF60 , BIBREF59 . The trade-off exists because it is usually very difficult to accurately find the exact features needed to make the prediction. Safely keeping more features will almost always lead to better performance. Some models do succeed in achieving superior performance by selecting only a subset of the input. However, they mostly still target the recall of the selection BIBREF39 , BIBREF9 , BIBREF35 , i.e., to select all possible content that might help predict the target. The final selected contents reduce some of the most useful information from the source, but they still contain many redundant contents (the same as our VRS-( INLINEFORM0 ) in Table TABREF34 and TABREF33 ). This makes them unsuitable for controllable content selection. In text generation, a recent work from BIBREF41 shows they could control the contents by integrating a symbolic selector into the neural network. However, their selector is tailored by some rules only for the RDF triples. Moreover, even based on their fine-tuned selector, the fluency they observe is still slightly worse than a standard seq2seq.
We assume the content selector is the major bottleneck if we want a model that can achieve controllability without sacrificing performance. We can clearly observe in Table TABREF34 that the performance drop on Wikibio is marginal compared with Gigaword. The reason should be that the selection on Wikibio is much easier than on Gigaword. The biography of a person almost always follows some simple patterns, like name, birthday and profession, but news headlines can contain information with various focuses. In our two tasks, due to the independence assumption we made on INLINEFORM0 and the model capacity limit, the content selector cannot fully fit the true selecting distribution, so the trade-off is necessary. Improving the selector with SOTA sequence labelling models like BERT BIBREF17 would be worth trying.
There are also other ways to improve. For example, we could learn a ranker to help us choose the best contents BIBREF63 . Or we could manually define some matching rules to help rank the selection BIBREF58 . In Table TABREF25 , we show the VRS model achieves very high metric scores based on an oracle ranker, so learning a ranker should be able to improve the performance straightforwardly.
Example from Wikibio
To see how we can manually control the content selection, Figure FIGREF42 shows an example from Wikibio; the model is mostly able to form a proper sentence covering all selected information. If the selector assigns a very high probability to selecting some content and we force it to be removed, the resulting text can be unnatural (as in summary 4 in Figure FIGREF42 , because the model has seen very little text without the birthday information in the training corpus). However, thanks to the diversity of the content selector as shown in the previous section, the model is able to handle most combinatorial patterns of content selection.
Posterior inference
Figure FIGREF41 further provides an example of how we can perform posterior inference given a provided text. Our model is able to infer which source contents are covered in the given summary. With the inferred selection, we can sample multiple paraphrases describing the same contents. As seen in Table TABREF34 and TABREF33 , the metric scores are remarkably high when decoding from the posterior inferred selections (last three rows), suggesting the posterior distribution is well trained. The posterior inference part could be beneficial for other tasks like content transfer among text BIBREF38 , BIBREF62 . The described source contents can be first predicted with the posterior inference, then transferred to a new text. | Yes |
b02d2d351bd2e49d4d59db0a8a6ef23cb90bfbc4 | b02d2d351bd2e49d4d59db0a8a6ef23cb90bfbc4_0 | Q: How does the model perform in comparison to end-to-end headline generation models?
Text: Introduction
Many text generation tasks, e.g., data-to-text, summarization and image captioning, can be naturally divided into two steps: content selection and surface realization. The generations are supposed to have two levels of diversity: (1) content-level diversity reflecting multiple possibilities of content selection (what to say) and (2) surface-level diversity reflecting the linguistic variations of verbalizing the selected contents (how to say) BIBREF0 , BIBREF1 . Recent neural network models handle these tasks with the encoder-decoder (Enc-Dec) framework BIBREF2 , BIBREF3 , which simultaneously performs selecting and verbalizing in a black-box way. Therefore, both levels of diversity are entangled within the generation. This entanglement, however, sacrifices the controllability and interpretability, making it difficult to specify the content to be conveyed in the generated text BIBREF4 , BIBREF5 .
With this in mind, this paper proposes decoupling content selection from the Enc-Dec framework to allow finer-grained control over the generation. Table TABREF2 shows an example. We can easily modify the content selection to generate text with various focuses, or sample multiple paraphrases by fixing the content selection.
Though there has been much work dealing with content selection for the Enc-Dec, none of them is able to address the above concerns properly. Current methods can be categorized into the following three classes and have different limits:
In this paper, we treat the content selection as latent variables and train with amortized variational inference BIBREF10 , BIBREF11 . This provides a lower training variance than Reinforce-select. The selector and generator are co-trained within the same objective, the generations are thus more faithful to the selected contents than Bottom-up methods. Our model is task-agnostic, end-to-end trainable and can be seamlessly inserted into any encoder-decoder architecture. On both the data-to-text and headline generation task, we show our model outperforms others regarding content-level diversity and controllability while maintaining comparable performance. The performance/controllability trade-off can be effectively adjusted by adjusting a single hyperparameter in the training stage, which constrains an upper bound of the conditional mutual information (CMI) between the selector and generated text BIBREF12 , BIBREF13 . A higher CMI leads to stronger controllability with a bit more risk of text disfluency.
In summary, our contributions are (1) systematically studying the problem of controllable content selection for Enc-Dec text generation, (2) proposing a task-agnostic training framework achieving promising results and (3) introducing an effective way to achieve the trade-off between performance and controllability.
Background and Notation
Let INLINEFORM0 denote a source-target pair. INLINEFORM1 is a sequence of INLINEFORM2 and can be either some structured data or unstructured text/image depending on the task. INLINEFORM3 corresponds to INLINEFORM4 which is a text description of INLINEFORM5 . The goal of text generation is to learn a distribution INLINEFORM6 to automatically generate proper text.
The Enc-Dec architecture handles this task with an encode-attend-decode process BIBREF3 , BIBREF14 . The encoder first encodes each INLINEFORM0 into a vector INLINEFORM1 . At each time step, the decoder pays attentions to some source embeddings and outputs the probability of the next token by INLINEFORM2 . INLINEFORM3 is a weighted average of source embeddings: DISPLAYFORM0
INLINEFORM0 is the hidden state of the decoder at time step INLINEFORM1 . INLINEFORM2 is a score function to compute the similarity between INLINEFORM3 and INLINEFORM4 BIBREF15 .
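To make the encode-attend-decode step above concrete, the sketch below computes attention weights with a softmax over similarity scores and forms the context vector as the weighted average of source embeddings. It is a minimal numpy illustration; the dot-product score function and the toy dimensions are assumptions for this sketch, since the score function INLINEFORM2 is not pinned down in this excerpt.

```python
import numpy as np

def attend(decoder_state, source_embeddings):
    """Return softmax attention weights and the context vector (weighted average)."""
    scores = source_embeddings @ decoder_state          # dot-product score per source token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                            # softmax over source positions
    context = weights @ source_embeddings               # weighted average of source embeddings
    return weights, context

rng = np.random.default_rng(0)
src = rng.normal(size=(4, 8))     # 4 source tokens, embedding size 8 (toy values)
dec = rng.normal(size=8)          # current decoder hidden state
weights, context = attend(dec, src)
print(weights.round(3), context.shape)
```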
Content Selection
Our goal is to decouple the content selection from the decoder by introducing an extra content selector. We hope the content-level diversity can be fully captured by the content selector for a more interpretable and controllable generation process. Following BIBREF6 , BIBREF16 , we define content selection as a sequence labeling task. Let INLINEFORM0 denote a sequence of binary selection masks. INLINEFORM1 if INLINEFORM2 is selected and 0 otherwise. INLINEFORM3 is assumed to be independent from each other and is sampled from a bernoulli distribution INLINEFORM4 . INLINEFORM6 is the bernoulli parameter, which we estimate using a two-layer feedforward network on top of the source encoder. Text are generated by first sampling INLINEFORM7 from INLINEFORM8 to decide which content to cover, then decode with the conditional distribution INLINEFORM9 . The text is expected to faithfully convey all selected contents and drop unselected ones. Fig. FIGREF8 depicts this generation process. Note that the selection is based on the token-level context-aware embeddings INLINEFORM10 and will maintain information from the surrounding contexts. It encourages the decoder to stay faithful to the original information instead of simply fabricating random sentences by connecting the selected tokens.
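The selector just described can be sketched as follows: a two-layer feedforward network maps each context-aware token embedding to a Bernoulli parameter, and a binary mask is sampled independently per token. The layer sizes, the tanh activation and the random toy inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def selection_probs(token_embeddings, W1, b1, W2, b2):
    """Two-layer feedforward selector: one Bernoulli parameter per source token."""
    hidden = np.tanh(token_embeddings @ W1 + b1)
    logits = hidden @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits))        # sigmoid -> per-token selection probability

def sample_mask(probs):
    """Sample a binary selection mask, independently per token."""
    return (rng.random(probs.shape) < probs).astype(int)

emb = rng.normal(size=(5, 8))                    # 5 context-aware token embeddings (toy)
W1, b1 = 0.1 * rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=16), 0.0
probs = selection_probs(emb, W1, b1, W2, b2)
print("selection probabilities:", probs.round(2))
print("sampled mask:           ", sample_mask(probs))
```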
For each source-target pair, the ground-truth selection mask is unknown, so training is challenging. In the following section, we discuss several training possibilities and introduce the proposed model in detail.
Bottom-up
The most intuitive way is training the content selector to target some heuristically extracted contents. For example, we can train the selector to select overlapped words between the source and target BIBREF6 , sentences with higher tf-idf scores BIBREF20 or identified image objects that appear in the caption BIBREF21 . A standard encoder-decoder model is independently trained. In the testing stage, the prediction of the content selector is used to hard-mask the attention vector to guide the text generation in a bottom-up way. Though easy to train, Bottom-up generation has the following two problems: (1) The heuristically extracted contents might be coarse and cannot reflect the variety of human languages and (2) The selector and decoder are independently trained towards different objectives thus might not adapt to each other well.
INLINEFORM0 as Latent Variable: Another way is to treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood. By doing so, the selector has the potential to automatically explore optimal selecting strategies best fit for the corresponding generator component.
With this in mind, we design INLINEFORM0 by changing the original decoder in the following way: (1) We initialize hidden states of the decoder from a mean pooling over selected contents to inform the decoder which contents to cover and (2) Unselected contents will be prohibited from being attended to: DISPLAYFORM0
INLINEFORM0 is the initial decoder hidden state and MLP denotes multi-layer-perceptron.
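The two decoder modifications above can be sketched as below: the initial decoder state comes from a mean pooling over the embeddings of selected tokens, and attention scores of unselected tokens are set to negative infinity so they receive zero attention weight. The single tanh layer standing in for the MLP and the toy shapes are assumptions for illustration.

```python
import numpy as np

def init_decoder_state(source_embeddings, mask, W, b):
    """Initialize the decoder hidden state from a mean pooling over selected embeddings."""
    selected = source_embeddings[mask == 1]
    pooled = selected.mean(axis=0)               # assumes at least one token is selected
    return np.tanh(pooled @ W + b)               # one-layer stand-in for the MLP

def masked_attention(scores, mask):
    """Prohibit unselected tokens from being attended to."""
    scores = np.where(mask == 1, scores, -np.inf)
    weights = np.exp(scores - scores.max())      # exp(-inf) = 0 for unselected tokens
    return weights / weights.sum()

rng = np.random.default_rng(2)
src = rng.normal(size=(5, 8))
mask = np.array([1, 0, 1, 1, 0])
W, b = 0.1 * rng.normal(size=(8, 8)), np.zeros(8)
print(init_decoder_state(src, mask, W, b).round(2))
print(masked_attention(rng.normal(size=5), mask).round(3))
```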
Since computing the exact marginal likelihood INLINEFORM0 requires enumerating over all possible combinations of INLINEFORM1 (complexity INLINEFORM2 ), we need some way to efficiently estimate the likelihood.
Soft-Select
Soft-select falls back on a deterministic network to output the likelihood function's first-order Taylor series approximation expanded at INLINEFORM0 : INLINEFORM1
By moving the expectation into the decoding function, we can deterministically compute the likelihood by setting INLINEFORM0 , reducing complexity to INLINEFORM1 . Each attention weight will first be “soft-masked" by INLINEFORM2 before being passed to the decoder. soft-select is fully differentiable and can be easily trained by gradient descent. However, this soft-approximation is normally inaccurate, especially when INLINEFORM3 has a high entropy, which is common in one-to-many text generation tasks. The gap between INLINEFORM4 and INLINEFORM5 will be large BIBREF22 , BIBREF23 . In practice, this would lead to unrealistic generations when sampling INLINEFORM6 from the deterministically trained distribution.
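A minimal sketch of the soft-masking just described: each attention weight is scaled by the corresponding selection probability before being passed to the decoder. The renormalization step is an assumption for readability, since the exact equation is not reproduced in this excerpt.

```python
import numpy as np

def soft_masked_attention(attention_weights, selection_probs):
    """Soft-select: scale each attention weight by its expected selection probability."""
    masked = attention_weights * selection_probs
    return masked / masked.sum()     # renormalize so weights sum to one (an assumption)

attention = np.array([0.10, 0.40, 0.30, 0.20])
probs = np.array([0.90, 0.10, 0.80, 0.50])
print(soft_masked_attention(attention, probs).round(3))
```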
Reinforce-Select
Reinforce-select (RS) BIBREF24 , BIBREF9 utilizes reinforcement learning to approximate the marginal likelihood. Specifically, it is trained to maximize a lower bound of the likelihood by applying the Jensen inequality: DISPLAYFORM0
The gradient to INLINEFORM0 is approximated with Monte-Carlo sampling by applying the REINFORCE algorithm BIBREF25 , BIBREF26 . To speed up convergence, we pre-train the selector by some distant supervision, which is a common practice in reinforcement learning. REINFORCE is unbiased but has a high variance. Many works have proposed sophisticated techniques for variance reduction BIBREF11 , BIBREF27 , BIBREF28 . In text generation, the high-variance problem is aggravated because there exist multiple valid selections. Accurately estimating the likelihood becomes difficult. Another issue is its tendency to avoid stochasticity BIBREF29 , which, as we will show in Sec SECREF27 , results in low content-level diversity.
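The REINFORCE update above can be sketched as a single-sample Monte-Carlo estimate: sample a mask from the selector, score it with the decoder log-likelihood, and weight the score function of the independent Bernoullis by that (optionally baselined) reward. The toy reward function and the zero baseline below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def reinforce_grad(probs, log_likelihood_fn, baseline=0.0):
    """Single-sample REINFORCE estimate of the gradient w.r.t. the Bernoulli parameters."""
    mask = (rng.random(probs.shape) < probs).astype(float)       # sample a mask
    reward = log_likelihood_fn(mask) - baseline                  # centered "reward"
    # d log q(m) / d p_i = m_i / p_i - (1 - m_i) / (1 - p_i) for independent Bernoullis
    score = mask / probs - (1.0 - mask) / (1.0 - probs)
    return reward * score

# toy stand-in for the decoder log-likelihood: rewards masks keeping tokens 1 and 3 only
toy_ll = lambda m: -float(np.sum((m - np.array([1.0, 0.0, 1.0, 0.0])) ** 2))

probs = np.full(4, 0.5)
avg_grad = np.mean([reinforce_grad(probs, toy_ll) for _ in range(5000)], axis=0)
print(avg_grad.round(2))   # roughly positive for tokens 1 and 3, negative for tokens 2 and 4
```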
Variational Reinforce-Select
We propose Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction. Instead of directly integrating over INLINEFORM0 , it imposes a proposal distribution INLINEFORM1 for importance sampling. The marginal likelihood is lower bounded by: DISPLAYFORM0
By choosing a proper INLINEFORM0 , the bound will be improved and the variance can be largely reduced compared with REINFORCE. If INLINEFORM1 equals the posterior distribution INLINEFORM2 , the bound is tight and the variance would be zero BIBREF30 . We define INLINEFORM3 as a mean-field distribution parameterized by a set of global parameters INLINEFORM4 to approach the true posterior distribution. INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are simultaneously trained by minimizing the last line of Eq. EQREF15 . INLINEFORM8 also allows us to further perform posterior inference: Given an arbitrary text INLINEFORM9 for a source INLINEFORM10 , we can infer which source contents are included in INLINEFORM11 (An example is given in Appendix SECREF9 ).
In Eq. EQREF15 , the KL divergence term can be computed analytically. As for the independence assumption, it can be summed over each individual INLINEFORM0 . The likelihood term is differentiable to INLINEFORM1 but not to INLINEFORM2 , we estimate the gradient to INLINEFORM3 in Eq EQREF15 by applying the REINFORCE estimator: DISPLAYFORM0
INLINEFORM0 is the control variate BIBREF25 . The optimal INLINEFORM1 would be BIBREF31 : DISPLAYFORM0
which we set as a soft-select approximation: DISPLAYFORM0
We estimate Eq. EQREF16 with a single sample from INLINEFORM0 for efficiency. Though multiple-sample could potentially further tighten the bound and reduce the variance BIBREF32 , BIBREF33 , BIBREF34 , it brings significant computational overhead, especially in text generation tasks where the whole sentence needs to be decoded.
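As noted above, the KL divergence term in Eq. EQREF15 is analytic for independent Bernoulli variables and simply sums over positions. A minimal sketch of that computation follows; the clipping constant is only a numerical-stability assumption.

```python
import numpy as np

def bernoulli_kl(q_probs, p_probs, eps=1e-8):
    """KL( Bernoulli(q) || Bernoulli(p) ), summed over independent positions."""
    q = np.clip(q_probs, eps, 1.0 - eps)
    p = np.clip(p_probs, eps, 1.0 - eps)
    return float(np.sum(q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))))

q = np.array([0.9, 0.2, 0.7])    # proposal / posterior selection probabilities (toy)
p = np.array([0.5, 0.5, 0.5])    # prior selection probabilities from the selector (toy)
print(round(bernoulli_kl(q, p), 4))
```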
Degree of Controllability
In practice, when treating content selection as latent variables, the model tends to end up with a trivial solution of always selecting all source tokens BIBREF35 , BIBREF36 . This behavior is understandable since Eq. EQREF10 strictly masks unselected tokens. Wrongly unselecting one token will largely deteriorate the likelihood. Under the maximum likelihood (MLE) objective, this high risk pushes the selector to take a conservative strategy of always keeping all tokens, then the whole model degenerates to the standard Enc-Dec and the selection mask loses effects on the generation. Usually people apply a penalty term to the selecting ratio when optimizing the likelihood: DISPLAYFORM0
INLINEFORM0 is the MLE loss function, INLINEFORM1 is the mean of INLINEFORM2 and INLINEFORM3 is the target selecting ratio. This forces the selector to select the most important INLINEFORM4 tokens for each source input instead of keeping all of them.
In our VRS model, we can easily adjust the degree of controllability by limiting an upper bound of the conditional mutual information (CMI) INLINEFORM0 BIBREF13 . Specifically, we can change our objective into: DISPLAYFORM0
INLINEFORM0 is a fixed Lagrangian multiplier. Eq. EQREF21 can be proved to be equal to maximum likelihood with the constraint INLINEFORM1 given a proper INLINEFORM2 BIBREF12 . A higher INLINEFORM3 indicates INLINEFORM4 has more influence on INLINEFORM5 (higher controllability), while always safely selecting all tokens will lead to INLINEFORM6 . It is preferred over Eq. EQREF20 because (a) CMI directly considers the dependency between the selection and the multiple possible texts, while limiting the ratio aims at finding the single most salient parts for each source, and (b) unlike CMI, limiting the ratio is coarse: it considers only the total selected size and ignores its internal distribution.
[tb] Variational Reinforce-Select (VRS) Parameters: INLINEFORM0 INLINEFORM1 TRUE Sample X,Y from the corpus; Encode X into INLINEFORM2 ;
INLINEFORM0 Update INLINEFORM1 with distant supervision;
Update INLINEFORM0 by INLINEFORM1 Eq. EQREF15 ; Update INLINEFORM2 by INLINEFORM3 Eq. EQREF21 ; INLINEFORM4 FALSE if Eq. EQREF15 degrades convergence and INLINEFORM5 is False
In practice, we can set INLINEFORM0 to adjust the degree of controllability we want. Later we will show it leads to a trade-off with performance. The final algorithm is detailed in Algorithm SECREF19 . For fairness, we train RS and VRS with the same control variate and pre-training strategy.
Related Work
Most content selection models train the selector with heuristic rules BIBREF39 , BIBREF20 , BIBREF16 , BIBREF6 , BIBREF40 , BIBREF41 , which fail to fully capture the relation between selection and generation. BIBREF7 , BIBREF8 , BIBREF42 , BIBREF20 “soft-select" word or sentence embeddings based on a gating function. The output score from the gate is a deterministic vector without any probabilistic variations, so controlling the selection to generate diverse text is impossible. Very few works explicitly define a bernoulli distribution for the selector, then train with the REINFORCE algorithm BIBREF24 , BIBREF9 , but the selection targets at a high recall regardless of the low precision, so the controllability over generated text is weak. BIBREF43 control the generation by manually concatenating entity embeddings, while our model is much more flexible by explicitly defining the selection probability over all source tokens. Our work is closely related with learning discrete representations with variational inference BIBREF44 , BIBREF45 , BIBREF46 , BIBREF33 , where we treat content selection as the latent representation. Limiting the KL-term is a common technique to deal with the “posterior collapse" problem BIBREF47 , BIBREF48 , BIBREF49 . We adopt a similar approach and use it to further control the selecting strategy.
Experiments
For the experiments, we focus on comparing (1) Bottom-up generation (Bo.Up.), (2) soft-select (SS), (3) Reinforce-select (RS) and (4) Variational-Reinforce-select (VRS) regarding their performance on content selection. SS and RS are trained with the selecting ratio constraint in Eq. EQREF20 . For the SS model, we further add a regularization term to encourage the maximum value of INLINEFORM0 to be close to 1 as in BIBREF7 . We first briefly introduce the tasks and important setup, then present the evaluation results.
Tasks and Setup
We test content-selection models on the headline and data-to-text generation task. Both tasks share the same framework with the only difference of source-side encoders.
Headline Generation: We use English Gigaword preprocessed by BIBREF50 , which pairs first sentences of news articles with their headlines. We keep most settings same as in BIBREF8 , but use a vocabulary built by byte-pair-encoding BIBREF51 . We find it speeds up training with superior performance.
Data-to-Text Generation: We use the Wikibio dataset BIBREF52 . The source is a Wikipedia infobox and the target is a one-sentence biography description. Most settings are the same as in BIBREF53 , but we use a bi-LSTM encoder for better performance.
Heuristically extracted content: This is used to train the selector for bottom up models and pre-train the RS and VRS model. For wikibio, we simply extract overlapped words between the source and target. In Gigaword, as the headline is more abstractive, we select the closest source word for each target word in the embedding space. Stop words and punctuations are prohibited from being selected.
Choice of INLINEFORM0 : As seen in Sec SECREF19 , we need to set the hyperparameter INLINEFORM1 for RS/SS and INLINEFORM2 for VRS. INLINEFORM3 corresponds to the selecting ratio. We set them as INLINEFORM4 for Wikibio and INLINEFORM5 for Gigaword. The value is decided by running a human evaluation to get the empirical estimation. To keep comparison fairness, we tune INLINEFORM6 to make VRS select similar amount of tokens with RS. The values we get are INLINEFORM7 for Wikibio and INLINEFORM8 for Gigaword. INLINEFORM9 is the number of source tokens.
Results and Analysis
Ideally we would expect the learned content selector to (1) have reasonable diversity so that text with various contents can be easily sampled, (2) properly control the contents described in the generated text and (3) not hurt performance. The following section will evaluate these three points in order.
Diversity: We first look into the diversity of content selection learned by different models. For each test data, 50 selection masks are randomly sampled from the model's learned distribution. Greedy decoding is run to generate the text for each mask. We measure the entropy of the selector, proportion of unique selection masks and generated text in the 50 samples. We further define the “effect" of the selector as the ratio of sampled unique text and mask. This indicates how often changing the selection mask will also lead to a change in the generated text. The results are averaged over all test data. Following BIBREF50 and BIBREF52 , we measure the quality of generated text with ROUGE-1, 2, L F-score for Gigaword and ROUGE-4, BLEU-4, NIST for Wikibio. As there is only one reference text for each source, we report an oracle upper bound of these scores by assuming an “oracle" that can choose the best text among all the candidates BIBREF54 , BIBREF21 . Namely, out of each 50 sampled text, we pick the one with the maximum metric score. The final metric score is evaluated on these “oracle" picked samples. The intuition is that if the content selector is properly trained, at least one out of the 50 samples should describe similar contents with the reference text, the metric score between it and the reference text should be high. Table TABREF25 lists the results. We can have the following observations:
The RS model completely fails to capture the content-level diversity. Its selector is largely deterministic, with the lowest entropy value among all models. In contrast, the selectors from SS, VRS and Bo.Up. have reasonable diversity, with over INLINEFORM0 and INLINEFORM1 unique selection masks for Gigaword and Wikibio respectively.
The selector from VRS has the strongest effect on the generator, especially on the Gigaword data where modifying the content selection changes the corresponding text in more than 95% of the cases. RS has the lowest effect value, which indicates that even with the selecting ratio constraint, its generator still ignores the selection mask to a large extent.
The oracle metric score of VRS is much higher than the other two. This is beneficial when people want to apply the model to generate a few candidate text then hand-pick the suitable one. VRS has more potential than the other three to contain the expected text. SS performs worst. The gap between the soft approximation and the real distribution, as mentioned before, indeed results in a large drop of performance.
In short, compared with the others, the content selector of VRS (1) is diverse, (2) has a stronger effect on the text generation and (3) has a larger potential of producing the expected text.
Controllability: We have shown the content selector of VRS is diverse and has strong effect on the text generation. This section aims at examining whether such effect is desirable, i.e., whether the selector is able to properly control the contents described in the text. We measure it based on the self-bleu metric and a human evaluation.
The self-bleu metric measures the controllability by evaluating the “intra-selection" similarity of generated text. Intuitively, by fixing the selection mask, multiple text sampled from the decoder are expected to describe the same contents and thereby should be highly similar to each other. The decoder should only model surface-level diversity without further modifying the selected contents. With this in mind, for each test data, we randomly sample a selection mask from the selector's distribution, then fix the mask and run the decoder to sample 10 different text. The self-BLEU-1 score BIBREF55 on the sampled text is reported, which is the average BLEU score between each text pair. A higher self-BLEU score indicates the sampled text are more similar with each other. The results are shown in Table TABREF31 . We can see generations from VRS have a clearly higher intra-selection similarity. SS performs even worse than RS, despite having a high effect score in Table TABREF25 . The selector from SS affects the generation in an undesirable way, which also explain why SS has a lowest oracle metric score though with a high score on content diversity and effect.
We further run a human evaluation to measure the text-content consistency among different models. 100 source text are randomly sampled from the human-written DUC 2004 data for task 1&2 BIBREF56 . Bo.Up, SS, RS and VRS are applied to generate the target text by first sampling a selection mask, then run beam search decoding with beam size 10. We are interested in seeing (1) if multiple generations from the same selection mask are paraphrases to each other (intra-consistent) and (2) if generations from different selection masks do differ in the content they described (inter-diverse). The results in Table TABREF32 show that VRS significantly outperforms the other two in both intra-consistency and inter-diversity. RS has the lowest score on both because the selector has very weak effects on the generation as measured in the last section. Bo.Up and SS lay between them. Overall VRS is able to maintain the highest content-text consistency among them.
Performance INLINEFORM0 Trade-off: To see if the selector affects performance, we also ask human annotators to judge the text fluency. The fluency score is computed as the average number of texts judged as fluent. We include generations from the standard Enc-Dec model. Table TABREF32 shows the best fluency is achieved for Enc-Dec. Imposing a content selector always affects the fluency a bit. The main reason is that when the controllability is strong, the change of selection will directly affect the text realization, so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. For SS and Bo.Up., the drop of fluency is significant because of the gap of the soft approximation and the independent training procedure. In general, VRS does properly decouple content selection from the enc-dec architecture, with only a tiny degradation in fluency.
Table TABREF33 / TABREF34 further measure the metric scores on Gigaword/Wikibio by decoding text from the best selection mask based on the selector's distribution (set INLINEFORM0 if INLINEFORM1 and 0 otherwise). We include results from VRS model with INLINEFORM2 , which puts no constraint on the mutual information. We further report the score by generating the best selection mask from the learned posterior distribution INLINEFORM3 for VRS model. Two current SOTA results from BIBREF8 and BIBREF53 and the proportion of selected source words for each model are also included. We have the following observations:
As the value of INLINEFORM0 decreases, the performance of VRS improves, but the selector loses more controllability because the model tends to over-select contents (over INLINEFORM1 source words selected). The text-content consistency will become low.
Increasing INLINEFORM0 sacrifices a bit of performance, but results are still comparable with SOTA, especially on Wikibio where the performance drop is minor. The reason should be that the selection is relatively easier to predict on Wikibio, while Gigaword has more uncertainty.
Increasing INLINEFORM0 improves the accuracy of the posterior selection. This would be useful when we want to perform posterior inference for some source-target pair.
Setting INLINEFORM0 can actually outperform SOTA seq2seq which keeps all tokens, suggesting it is still beneficial to use the VRS model even if we do not care about the controllability.
Figure FIGREF39 visualizes how changing the value of INLINEFORM0 affects the negative log likelihood (NLL), entropy of the selector and self-bleu score, which roughly correlates with performance, diversity and controllability. NLL is evaluated based on the lower bound in Eq EQREF15 BIBREF57 . We can see as INLINEFORM1 increases, the performance decreases gradually but the content selection gains more diversity and controllability. In practice we can tune the INLINEFORM2 value to achieve a trade-off.
Generation Example: Figure FIGREF40 shows some examples from Gigaword. As can be seen, decodings from the VRS model are largely consistent with each other, in most cases only replacing one or two words with corresponding synonyms. Samples are able to faithfully convey all selected contents. In contrast, generations from SS, Bo.Up. and RS are unpredictable, differing in both the selected contents and the way of saying them. SS and Bo.Up. also suffer more from text disfluency. The generations from them are largely uncertain.
Conclusion
In this paper, we tackle the unaddressed problem of controllable content selection in text generation. We propose a general framework based on variational inference that can potentially be applied to arbitrary tasks. On both the headline generation and data-to-text tasks, our model outperforms state-of-the-art models regarding the diversity and controllability of content selection. We further introduce an effective way to achieve a performance/controllability trade-off, which can be easily tuned to meet specific requirements.
Acknowledgments
We thank anonymous reviewers for valuable comments, thank Aditya Mogadala, Shun Kiyono, Thomas Mclachlan and other members of the LIAT team at RIKEN AIP for useful discussions. Xiaoyu Shen is supported by IMPRS-CS fellowship. The work of J. Suzuki was partly supported by JSPS KAKENHI Grant Number JP19104418 and AIRPF Grant Number 30AI036-8. This work is also partially funded by DFG collaborative research center SFB 1102.
Performance/Controllability trade-off
The trade-off between performance and interpretability has been a long-standing problem in feature selection BIBREF60 , BIBREF59 . The trade-off exists because it is usually very difficult to accurately find the exact features needed to make the prediction. Safely keeping more features will almost always lead to better performance. Some models do succeed in achieving superior performance by selecting only a subset of the input. However, they mostly still target at the recall of the selection BIBREF39 , BIBREF9 , BIBREF35 , i.e., to select all possible content that might help predict the target. The final selected contents reduce some most useful information from the source, but they still contain many redundant contents (same like our VRS-( INLINEFORM0 ) as in Table TABREF34 and TABREF33 ). This makes them unsuitable for controllable content selection. In text generation, a recent work from BIBREF41 shows they could control the contents by integrating a symbolic selector into the neural network. However, their selector is tailored by some rules only for the RDF triples. Moreover, even based on their fine-tuned selector, the fluency they observe is still slightly worse than a standard seq2seq.
We assume the content selector is the major bottleneck if we want a model that can achieve controllability without sacrificing performance. We can clearly observe in Table TABREF34 that the performance drop on Wikibio is marginal compared with Gigaword. The reason should be that the selection on Wikibio is much easier than on Gigaword. The biography of a person almost always follows some simple patterns, like name, birthday and profession, but a news headline can contain information with various focuses. In our two tasks, due to the independence assumption we made on INLINEFORM0 and the model capacity limit, the content selector cannot fully fit the true selecting distribution, so the trade-off is necessary. Improving the selector with SOTA sequence labelling models like Bert BIBREF17 would be worth trying.
There are also other ways to improve. For example, we could learn a ranker to help us choose the best contents BIBREF63 . Or we could manually define some matching rules to help rank the selection BIBREF58 . In Table TABREF25 , we show the VRS model achieves very high metric scores based on an oracle ranker, so learning a ranker should be able to improve the performance straightforwardly.
Example from Wikibio
To see how we can manually control the content selection, Figure FIGREF42 shows an example from Wikibio; the model is mostly able to form a proper sentence covering all selected information. If the selector assigns a very high probability to selecting some content and we force the model to remove it, the resulting text could be unnatural (as in summary 4 in Figure FIGREF42 , because the model has seen very few texts without the birthday information in the training corpus). However, thanks to the diversity of the content selector as shown in the previous section, it is able to handle most combinatorial patterns of content selection.
Posterior inference
Figure FIGREF41 further provides an example of how we can perform posterior inference given a provided text. Our model is able to infer which source contents are covered in the given summary. With the inferred selection, we can sample multiple paraphrases describing the same contents. As seen in Table TABREF34 and TABREF33 , the metric scores are remarkably high when decoding from the posterior inferred selections (last three rows), suggesting the posterior distribution is well trained. The posterior inference part could be beneficial for other tasks like content transfer among text BIBREF38 , BIBREF62 . The described source contents can be first predicted with the posterior inference, then transferred to a new text. | Unanswerable |
a035472a5c6cf238bed62b63d28100c546d40bd5 | a035472a5c6cf238bed62b63d28100c546d40bd5_0 | Q: How is the model trained to do only content selection?
Text: Introduction
Many text generation tasks, e.g., data-to-text, summarization and image captioning, can be naturally divided into two steps: content selection and surface realization. The generations are supposed to have two levels of diversity: (1) content-level diversity reflecting multiple possibilities of content selection (what to say) and (2) surface-level diversity reflecting the linguistic variations of verbalizing the selected contents (how to say) BIBREF0 , BIBREF1 . Recent neural network models handle these tasks with the encoder-decoder (Enc-Dec) framework BIBREF2 , BIBREF3 , which simultaneously performs selecting and verbalizing in a black-box way. Therefore, both levels of diversity are entangled within the generation. This entanglement, however, sacrifices the controllability and interpretability, making it difficult to specify the content to be conveyed in the generated text BIBREF4 , BIBREF5 .
With this in mind, this paper proposes decoupling content selection from the Enc-Dec framework to allow finer-grained control over the generation. Table TABREF2 shows an example. We can easily modify the content selection to generate text with various focuses, or sample multiple paraphrases by fixing the content selection.
Though there has been much work dealing with content selection for the Enc-Dec, none of them is able to address the above concerns properly. Current methods can be categorized into the following three classes and have different limits:
In this paper, we treat the content selection as latent variables and train with amortized variational inference BIBREF10 , BIBREF11 . This provides a lower training variance than Reinforce-select. The selector and generator are co-trained within the same objective, the generations are thus more faithful to the selected contents than Bottom-up methods. Our model is task-agnostic, end-to-end trainable and can be seamlessly inserted into any encoder-decoder architecture. On both the data-to-text and headline generation task, we show our model outperforms others regarding content-level diversity and controllability while maintaining comparable performance. The performance/controllability trade-off can be effectively adjusted by adjusting a single hyperparameter in the training stage, which constrains an upper bound of the conditional mutual information (CMI) between the selector and generated text BIBREF12 , BIBREF13 . A higher CMI leads to stronger controllability with a bit more risk of text disfluency.
In summary, our contributions are (1) systematically studying the problem of controllable content selection for Enc-Dec text generation, (2) proposing a task-agnostic training framework achieving promising results and (3) introducing an effective way to achieve the trade-off between performance and controllability.
Background and Notation
Let INLINEFORM0 denote a source-target pair. INLINEFORM1 is a sequence of INLINEFORM2 and can be either some structured data or unstructured text/image depending on the task. INLINEFORM3 corresponds to INLINEFORM4 which is a text description of INLINEFORM5 . The goal of text generation is to learn a distribution INLINEFORM6 to automatically generate proper text.
The Enc-Dec architecture handles this task with an encode-attend-decode process BIBREF3 , BIBREF14 . The encoder first encodes each INLINEFORM0 into a vector INLINEFORM1 . At each time step, the decoder pays attentions to some source embeddings and outputs the probability of the next token by INLINEFORM2 . INLINEFORM3 is a weighted average of source embeddings: DISPLAYFORM0
INLINEFORM0 is the hidden state of the decoder at time step INLINEFORM1 . INLINEFORM2 is a score function to compute the similarity between INLINEFORM3 and INLINEFORM4 BIBREF15 .
Content Selection
Our goal is to decouple the content selection from the decoder by introducing an extra content selector. We hope the content-level diversity can be fully captured by the content selector for a more interpretable and controllable generation process. Following BIBREF6 , BIBREF16 , we define content selection as a sequence labeling task. Let INLINEFORM0 denote a sequence of binary selection masks. INLINEFORM1 if INLINEFORM2 is selected and 0 otherwise. INLINEFORM3 is assumed to be independent from each other and is sampled from a bernoulli distribution INLINEFORM4 . INLINEFORM6 is the bernoulli parameter, which we estimate using a two-layer feedforward network on top of the source encoder. Text are generated by first sampling INLINEFORM7 from INLINEFORM8 to decide which content to cover, then decode with the conditional distribution INLINEFORM9 . The text is expected to faithfully convey all selected contents and drop unselected ones. Fig. FIGREF8 depicts this generation process. Note that the selection is based on the token-level context-aware embeddings INLINEFORM10 and will maintain information from the surrounding contexts. It encourages the decoder to stay faithful to the original information instead of simply fabricating random sentences by connecting the selected tokens.
For each source-target pair, the ground-truth selection mask is unknown, so training is challenging. In the following section, we discuss several training possibilities and introduce the proposed model in detail.
Bottom-up
The most intuitive way is training the content selector to target some heuristically extracted contents. For example, we can train the selector to select overlapped words between the source and target BIBREF6 , sentences with higher tf-idf scores BIBREF20 or identified image objects that appear in the caption BIBREF21 . A standard encoder-decoder model is independently trained. In the testing stage, the prediction of the content selector is used to hard-mask the attention vector to guide the text generation in a bottom-up way. Though easy to train, Bottom-up generation has the following two problems: (1) The heuristically extracted contents might be coarse and cannot reflect the variety of human languages and (2) The selector and decoder are independently trained towards different objectives thus might not adapt to each other well.
INLINEFORM0 as Latent Variable: Another way is to treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood. By doing so, the selector has the potential to automatically explore optimal selecting strategies best fit for the corresponding generator component.
With this in mind, we design INLINEFORM0 by changing the original decoder in the following way: (1) We initialize hidden states of the decoder from a mean pooling over selected contents to inform the decoder which contents to cover and (2) Unselected contents will be prohibited from being attended to: DISPLAYFORM0
INLINEFORM0 is the initial decoder hidden state and MLP denotes multi-layer-perceptron.
Since computing the exact marginal likelihood INLINEFORM0 requires enumerating over all possible combinations of INLINEFORM1 (complexity INLINEFORM2 ), we need some way to efficiently estimate the likelihood.
Soft-Select
Soft-select falls back on a deterministic network to output the likelihood function's first-order Taylor series approximation expanded at INLINEFORM0 : INLINEFORM1
By moving the expectation into the decoding function, we can deterministically compute the likelihood by setting INLINEFORM0 , reducing complexity to INLINEFORM1 . Each attention weight will first be “soft-masked" by INLINEFORM2 before being passed to the decoder. soft-select is fully differentiable and can be easily trained by gradient descent. However, this soft-approximation is normally inaccurate, especially when INLINEFORM3 has a high entropy, which is common in one-to-many text generation tasks. The gap between INLINEFORM4 and INLINEFORM5 will be large BIBREF22 , BIBREF23 . In practice, this would lead to unrealistic generations when sampling INLINEFORM6 from the deterministically trained distribution.
Reinforce-Select
Reinforce-select (RS) BIBREF24 , BIBREF9 utilizes reinforcement learning to approximate the marginal likelihood. Specifically, it is trained to maximize a lower bound of the likelihood by applying the Jensen inequality: DISPLAYFORM0
The gradient to INLINEFORM0 is approximated with Monte-Carlo sampling by applying the REINFORCE algorithm BIBREF25 , BIBREF26 . To speed up convergence, we pre-train the selector by some distant supervision, which is a common practice in reinforcement learning. REINFORCE is unbiased but has a high variance. Many works have proposed sophisticated techniques for variance reduction BIBREF11 , BIBREF27 , BIBREF28 . In text generation, the high-variance problem is aggravated because there exist multiple valid selections. Accurately estimating the likelihood becomes difficult. Another issue is its tendency to avoid stochasticity BIBREF29 , which, as we will show in Sec SECREF27 , results in low content-level diversity.
Variational Reinforce-Select
We propose Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction. Instead of directly integrating over INLINEFORM0 , it imposes a proposal distribution INLINEFORM1 for importance sampling. The marginal likelihood is lower bounded by: DISPLAYFORM0
By choosing a proper INLINEFORM0 , the bound will be improved and the variance can be largely reduced compared with REINFORCE. If INLINEFORM1 equals the posterior distribution INLINEFORM2 , the bound is tight and the variance would be zero BIBREF30 . We define INLINEFORM3 as a mean-field distribution parameterized by a set of global parameters INLINEFORM4 to approach the true posterior distribution. INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are simultaneously trained by minimizing the last line of Eq. EQREF15 . INLINEFORM8 also allows us to further perform posterior inference: Given an arbitrary text INLINEFORM9 for a source INLINEFORM10 , we can infer which source contents are included in INLINEFORM11 (An example is given in Appendix SECREF9 ).
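To illustrate the lower bound discussed above, the sketch below evaluates a standard single-sample evidence lower bound for this model family: a reconstruction term under the proposal minus the KL from the proposal to the selector's prior. Treating Eq. EQREF15 as this standard ELBO is an assumption, since the equation itself is elided in this excerpt, and the toy log-likelihood is made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def bernoulli_kl(q, p, eps=1e-8):
    """Closed-form KL between independent Bernoulli distributions, summed over positions."""
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return float(np.sum(q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))))

def single_sample_elbo(q_probs, prior_probs, log_likelihood_fn):
    """One-sample estimate: reconstruction under the proposal minus KL(proposal || prior)."""
    mask = (rng.random(q_probs.shape) < q_probs).astype(float)   # sample a mask from the proposal
    return log_likelihood_fn(mask) - bernoulli_kl(q_probs, prior_probs)

toy_ll = lambda m: -float(np.sum((m - np.array([1.0, 0.0, 1.0])) ** 2))  # made-up decoder score
q = np.array([0.95, 0.05, 0.90])     # proposal (posterior) selection probabilities (toy)
p = np.array([0.50, 0.50, 0.50])     # prior probabilities from the content selector (toy)
print(round(single_sample_elbo(q, p, toy_ll), 3))
```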
In Eq. EQREF15 , the KL divergence term can be computed analytically. As for the independence assumption, it can be summed over each individual INLINEFORM0 . The likelihood term is differentiable to INLINEFORM1 but not to INLINEFORM2 , we estimate the gradient to INLINEFORM3 in Eq EQREF15 by applying the REINFORCE estimator: DISPLAYFORM0
INLINEFORM0 is the control variate BIBREF25 . The optimal INLINEFORM1 would be BIBREF31 : DISPLAYFORM0
which we set as a soft-select approximation: DISPLAYFORM0
We estimate Eq. EQREF16 with a single sample from INLINEFORM0 for efficiency. Though multiple-sample could potentially further tighten the bound and reduce the variance BIBREF32 , BIBREF33 , BIBREF34 , it brings significant computational overhead, especially in text generation tasks where the whole sentence needs to be decoded.
Degree of Controllability
In practice, when treating content selection as latent variables, the model tends to end up with a trivial solution of always selecting all source tokens BIBREF35 , BIBREF36 . This behavior is understandable since Eq. EQREF10 strictly masks unselected tokens. Wrongly unselecting one token will largely deteriorate the likelihood. Under the maximum likelihood (MLE) objective, this high risk pushes the selector to take a conservative strategy of always keeping all tokens, then the whole model degenerates to the standard Enc-Dec and the selection mask loses effects on the generation. Usually people apply a penalty term to the selecting ratio when optimizing the likelihood: DISPLAYFORM0
INLINEFORM0 is the MLE loss function, INLINEFORM1 is the mean of INLINEFORM2 and INLINEFORM3 is the target selecting ratio. This forces the selector to select the most important INLINEFORM4 tokens for each source input instead of keeping all of them.
In our VRS model, we can easily adjust the degree of controllability by limiting an upper bound of the conditional mutual information (CMI) INLINEFORM0 BIBREF13 . Specifically, we can change our objective into: DISPLAYFORM0
INLINEFORM0 is a fixed Lagrangian multiplier. Eq. EQREF21 can be proved to be equal to maximum likelihood with the constraint INLINEFORM1 given a proper INLINEFORM2 BIBREF12 . A higher INLINEFORM3 indicates INLINEFORM4 has more influence on INLINEFORM5 (higher controllability), while always safely selecting all tokens will lead to INLINEFORM6 . It is preferred over Eq. EQREF20 because (a) CMI directly considers the dependency between the selection and the multiple possible texts, while limiting the ratio aims at finding the single most salient parts for each source, and (b) unlike CMI, limiting the ratio is coarse: it considers only the total selected size and ignores its internal distribution.
[tb] Variational Reinforce-Select (VRS) Parameters: INLINEFORM0 INLINEFORM1 TRUE Sample X,Y from the corpus; Encode X into INLINEFORM2 ;
INLINEFORM0 Update INLINEFORM1 with distant supervision;
Update INLINEFORM0 by INLINEFORM1 Eq. EQREF15 ; Update INLINEFORM2 by INLINEFORM3 Eq. EQREF21 ; INLINEFORM4 FALSE if Eq. EQREF15 degrades convergence and INLINEFORM5 is False
In practice, we can set INLINEFORM0 to adjust the degree of controllability we want. Later we will show it leads to a trade-off with performance. The final algorithm is detailed in Algorithm SECREF19 . For fairness, we train RS and VRS with the same control variate and pre-training strategy.
Related Work
Most content selection models train the selector with heuristic rules BIBREF39 , BIBREF20 , BIBREF16 , BIBREF6 , BIBREF40 , BIBREF41 , which fail to fully capture the relation between selection and generation. BIBREF7 , BIBREF8 , BIBREF42 , BIBREF20 “soft-select" word or sentence embeddings based on a gating function. The output score from the gate is a deterministic vector without any probabilistic variations, so controlling the selection to generate diverse text is impossible. Very few works explicitly define a bernoulli distribution for the selector, then train with the REINFORCE algorithm BIBREF24 , BIBREF9 , but the selection targets at a high recall regardless of the low precision, so the controllability over generated text is weak. BIBREF43 control the generation by manually concatenating entity embeddings, while our model is much more flexible by explicitly defining the selection probability over all source tokens. Our work is closely related with learning discrete representations with variational inference BIBREF44 , BIBREF45 , BIBREF46 , BIBREF33 , where we treat content selection as the latent representation. Limiting the KL-term is a common technique to deal with the “posterior collapse" problem BIBREF47 , BIBREF48 , BIBREF49 . We adopt a similar approach and use it to further control the selecting strategy.
Experiments
For the experiments, we focus on comparing (1) Bottom-up generation (Bo.Up.), (2) soft-select (SS), (3) Reinforce-select (RS) and (4) Variational-Reinforce-select (VRS) regarding their performance on content selection. SS and RS are trained with the selecting ratio constraint in Eq. EQREF20 . For the SS model, we further add a regularization term to encourage the maximum value of INLINEFORM0 to be close to 1 as in BIBREF7 . We first briefly introduce the tasks and important setup, then present the evaluation results.
Tasks and Setup
We test content-selection models on the headline and data-to-text generation task. Both tasks share the same framework with the only difference of source-side encoders.
Headline Generation: We use English Gigaword preprocessed by BIBREF50 , which pairs first sentences of news articles with their headlines. We keep most settings same as in BIBREF8 , but use a vocabulary built by byte-pair-encoding BIBREF51 . We find it speeds up training with superior performance.
Data-to-Text Generation: We use the Wikibio dataset BIBREF52 . The source is a Wikipedia infobox and the target is a one-sentence biography description. Most settings are the same as in BIBREF53 , but we use a bi-LSTM encoder for better performance.
Heuristically extracted content: This is used to train the selector for bottom up models and pre-train the RS and VRS model. For wikibio, we simply extract overlapped words between the source and target. In Gigaword, as the headline is more abstractive, we select the closest source word for each target word in the embedding space. Stop words and punctuations are prohibited from being selected.
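The heuristic extraction above can be sketched as follows: for Wikibio-style data, mark source tokens that literally overlap with the target; for more abstractive targets, mark, for each target word, the closest source word in an embedding space, skipping stop words and punctuation. The tiny embedding table and the stop-word list below are toy assumptions.

```python
import numpy as np

STOP = {"the", "a", "of", "in", ",", "."}

def overlap_mask(source_tokens, target_tokens):
    """Wikibio-style heuristic: select source tokens that also appear in the target."""
    target = set(target_tokens)
    return [1 if tok in target and tok not in STOP else 0 for tok in source_tokens]

def nearest_source_mask(source_tokens, target_tokens, emb):
    """Gigaword-style heuristic: for each target word, select the closest source word."""
    mask = [0] * len(source_tokens)
    for tgt in target_tokens:
        if tgt in STOP or tgt not in emb:
            continue
        dists = [np.linalg.norm(emb[tgt] - emb[src]) if (src in emb and src not in STOP) else np.inf
                 for src in source_tokens]
        if np.isfinite(min(dists)):
            mask[int(np.argmin(dists))] = 1
    return mask

# toy 2-d "embeddings" and a toy source/target pair
emb = {"striker": np.array([1.0, 0.0]), "scores": np.array([0.9, 0.1]),
       "goal": np.array([0.8, 0.3]), "wins": np.array([0.2, 1.0]), "match": np.array([0.1, 0.9])}
src = ["the", "striker", "scores", "a", "goal", "in", "the", "match"]
tgt = ["striker", "wins", "match"]
print(overlap_mask(src, tgt))
print(nearest_source_mask(src, tgt, emb))
```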
Choice of INLINEFORM0 : As seen in Sec SECREF19 , we need to set the hyperparameter INLINEFORM1 for RS/SS and INLINEFORM2 for VRS. INLINEFORM3 corresponds to the selecting ratio. We set them as INLINEFORM4 for Wikibio and INLINEFORM5 for Gigaword. The value is decided by running a human evaluation to get the empirical estimation. To keep comparison fairness, we tune INLINEFORM6 to make VRS select similar amount of tokens with RS. The values we get are INLINEFORM7 for Wikibio and INLINEFORM8 for Gigaword. INLINEFORM9 is the number of source tokens.
Results and Analysis
Ideally we would expect the learned content selector to (1) have reasonable diversity so that text with various contents can be easily sampled, (2) properly control the contents described in the generated text and (3) not hurt performance. The following section will evaluate these three points in order.
Diversity: We first look into the diversity of content selection learned by different models. For each test data, 50 selection masks are randomly sampled from the model's learned distribution. Greedy decoding is run to generate the text for each mask. We measure the entropy of the selector, proportion of unique selection masks and generated text in the 50 samples. We further define the “effect" of the selector as the ratio of sampled unique text and mask. This indicates how often changing the selection mask will also lead to a change in the generated text. The results are averaged over all test data. Following BIBREF50 and BIBREF52 , we measure the quality of generated text with ROUGE-1, 2, L F-score for Gigaword and ROUGE-4, BLEU-4, NIST for Wikibio. As there is only one reference text for each source, we report an oracle upper bound of these scores by assuming an “oracle" that can choose the best text among all the candidates BIBREF54 , BIBREF21 . Namely, out of each 50 sampled text, we pick the one with the maximum metric score. The final metric score is evaluated on these “oracle" picked samples. The intuition is that if the content selector is properly trained, at least one out of the 50 samples should describe similar contents with the reference text, the metric score between it and the reference text should be high. Table TABREF25 lists the results. We can have the following observations:
The RS model completely fails to capture the content-level diversity. Its selector is largely deterministic, with the lowest entropy value among all models. In contrast, the selectors from SS, VRS and Bo.Up. have reasonable diversity, with over INLINEFORM0 and INLINEFORM1 unique selection masks for Gigaword and Wikibio respectively.
The selector from VRS has the strongest effect on the generator, especially on the Gigaword data where modifying the content selection changes the corresponding text in more than 95% of the cases. RS has the lowest effect value, which indicates that even with the selecting ratio constraint, its generator still ignores the selection mask to a large extent.
The oracle metric score of VRS is much higher than the other two. This is beneficial when people want to apply the model to generate a few candidate text then hand-pick the suitable one. VRS has more potential than the other three to contain the expected text. SS performs worst. The gap between the soft approximation and the real distribution, as mentioned before, indeed results in a large drop of performance.
In short, compared with the others, the content selector of VRS (1) is diverse, (2) has a stronger effect on the text generation and (3) has a larger potential of producing the expected text.
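The evaluation protocol described before these observations (sample 50 masks per test instance, count unique masks and unique generations, compute the "effect" ratio, and take the best metric score among the samples as an oracle upper bound) can be sketched with placeholder generation and metric functions; everything model-specific below is a stand-in assumption.

```python
def diversity_and_oracle(masks, texts, reference, metric_fn):
    """Unique-mask/text proportions, the 'effect' ratio and a best-of-N 'oracle' metric score."""
    unique_masks = len({tuple(m) for m in masks}) / len(masks)
    unique_texts = len(set(texts)) / len(texts)
    effect = unique_texts / unique_masks if unique_masks > 0 else 0.0
    oracle = max(metric_fn(t, reference) for t in texts)     # best sample against the reference
    return unique_masks, unique_texts, effect, oracle

# toy stand-ins: a unigram-overlap "metric" and 4 sampled (mask, text) pairs for one instance
toy_metric = lambda text, ref: len(set(text.split()) & set(ref.split())) / max(len(set(ref.split())), 1)
masks = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 0)]
texts = ["striker scores goal", "striker scores goal", "team wins match", "striker wins"]
print(diversity_and_oracle(masks, texts, "striker scores late goal", toy_metric))
```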
Controllability: We have shown the content selector of VRS is diverse and has strong effect on the text generation. This section aims at examining whether such effect is desirable, i.e., whether the selector is able to properly control the contents described in the text. We measure it based on the self-bleu metric and a human evaluation.
The self-bleu metric measures the controllability by evaluating the “intra-selection" similarity of generated text. Intuitively, by fixing the selection mask, multiple text sampled from the decoder are expected to describe the same contents and thereby should be highly similar to each other. The decoder should only model surface-level diversity without further modifying the selected contents. With this in mind, for each test data, we randomly sample a selection mask from the selector's distribution, then fix the mask and run the decoder to sample 10 different text. The self-BLEU-1 score BIBREF55 on the sampled text is reported, which is the average BLEU score between each text pair. A higher self-BLEU score indicates the sampled text are more similar with each other. The results are shown in Table TABREF31 . We can see generations from VRS have a clearly higher intra-selection similarity. SS performs even worse than RS, despite having a high effect score in Table TABREF25 . The selector from SS affects the generation in an undesirable way, which also explain why SS has a lowest oracle metric score though with a high score on content diversity and effect.
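A minimal sketch of the self-BLEU-1 computation described above: for texts sampled under one fixed selection mask, average a clipped unigram-precision BLEU-1 over all ordered text pairs. This simplified implementation (no brevity penalty, no smoothing) is an assumption about details the paper does not restate here.

```python
from collections import Counter
from itertools import permutations

def bleu1(candidate, reference):
    """Clipped unigram precision of candidate against a single reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / max(sum(cand.values()), 1)

def self_bleu1(texts):
    """Average BLEU-1 between each ordered pair of sampled texts."""
    pairs = list(permutations(texts, 2))
    return sum(bleu1(a, b) for a, b in pairs) / len(pairs)

samples = ["the striker scores a late goal",
           "the striker scores a goal late on",
           "a late goal from the striker"]
print(round(self_bleu1(samples), 3))
```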
We further run a human evaluation to measure the text-content consistency among different models. 100 source text are randomly sampled from the human-written DUC 2004 data for task 1&2 BIBREF56 . Bo.Up, SS, RS and VRS are applied to generate the target text by first sampling a selection mask, then run beam search decoding with beam size 10. We are interested in seeing (1) if multiple generations from the same selection mask are paraphrases to each other (intra-consistent) and (2) if generations from different selection masks do differ in the content they described (inter-diverse). The results in Table TABREF32 show that VRS significantly outperforms the other two in both intra-consistency and inter-diversity. RS has the lowest score on both because the selector has very weak effects on the generation as measured in the last section. Bo.Up and SS lay between them. Overall VRS is able to maintain the highest content-text consistency among them.
Performance INLINEFORM0 Trade-off: To see if the selector affects performance, we also ask human annotators to judge the text fluency. The fluency score is computed as the average number of texts judged as fluent. We include generations from the standard Enc-Dec model. Table TABREF32 shows the best fluency is achieved for Enc-Dec. Imposing a content selector always affects the fluency a bit. The main reason is that when the controllability is strong, the change of selection will directly affect the text realization, so that a tiny error of content selection might lead to unrealistic text. If the selector is not perfectly trained, the fluency will inevitably be influenced. When the controllability is weaker, like in RS, the fluency is more stable because it will not be affected much by the selection mask. For SS and Bo.Up., the drop of fluency is significant because of the gap of the soft approximation and the independent training procedure. In general, VRS does properly decouple content selection from the enc-dec architecture, with only a tiny degradation in fluency.
Table TABREF33 / TABREF34 further measure the metric scores on Gigaword/Wikibio by decoding text from the best selection mask based on the selector's distribution (set INLINEFORM0 if INLINEFORM1 and 0 otherwise). We include results from VRS model with INLINEFORM2 , which puts no constraint on the mutual information. We further report the score by generating the best selection mask from the learned posterior distribution INLINEFORM3 for VRS model. Two current SOTA results from BIBREF8 and BIBREF53 and the proportion of selected source words for each model are also included. We have the following observations:
As the value of INLINEFORM0 decreases, the performance of VRS improves, but the selector loses more controllability because the model tends to over-select contents (over INLINEFORM1 source words selected). The text-content consistency will become low.
Increasing INLINEFORM0 sacrifices a bit of performance, but results are still comparable with SOTA, especially on Wikibio where the performance drop is minor. The reason should be that the selection is relatively easier to predict on Wikibio, while Gigaword has more uncertainty.
Increasing INLINEFORM0 improves the accuracy of the posterior selection. This would be useful when we want to perform posterior inference for some source-target pair.
Setting INLINEFORM0 can actually outperform SOTA seq2seq which keeps all tokens, suggesting it is still beneficial to use the VRS model even if we do not care about the controllability.
Figure FIGREF39 visualizes how changing the value of INLINEFORM0 affects the negative log likelihood (NLL), entropy of the selector and self-bleu score, which roughly correlates with performance, diversity and controllability. NLL is evaluated based on the lower bound in Eq EQREF15 BIBREF57 . We can see as INLINEFORM1 increases, the performance decreases gradually but the content selection gains more diversity and controllability. In practice we can tune the INLINEFORM2 value to achieve a trade-off.
Generation Example: Figure FIGREF40 shows some examples from Gigaword. As can be seen, decodings from the VRS model are largely consistent with each other, in most cases only replacing one or two words with corresponding synonyms. Samples are able to faithfully convey all selected contents. In contrast, generations from SS, Bo.Up. and RS are unpredictable, differing in both the selected contents and the way of saying them. SS and Bo.Up. also suffer more from text disfluency. The generations from them are largely uncertain.
Conclusion
In this paper, we tackle the unaddressed problem of controllable content selection in text generation. We propose a general framework based on variational inference that can potentially be applied to arbitrary tasks. On both the headline generation and data-to-text tasks, our model outperforms state-of-the-art models regarding the diversity and controllability of content selection. We further introduce an effective way to achieve a performance/controllability trade-off, which can be easily tuned to meet specific requirements.
Acknowledgments
We thank anonymous reviewers for valuable comments, thank Aditya Mogadala, Shun Kiyono, Thomas Mclachlan and other members of the LIAT team at RIKEN AIP for useful discussions. Xiaoyu Shen is supported by IMPRS-CS fellowship. The work of J. Suzuki was partly supported by JSPS KAKENHI Grant Number JP19104418 and AIRPF Grant Number 30AI036-8. This work is also partially funded by DFG collaborative research center SFB 1102.
Performance/Controllability trade-off
The trade-off between performance and interpretability has been a long-standing problem in feature selection BIBREF60 , BIBREF59 . The trade-off exists because it is usually very difficult to accurately find the exact features needed to make the prediction. Safely keeping more features will almost always lead to better performance. Some models do succeed in achieving superior performance by selecting only a subset of the input. However, they mostly still target at the recall of the selection BIBREF39 , BIBREF9 , BIBREF35 , i.e., to select all possible content that might help predict the target. The final selected contents reduce some most useful information from the source, but they still contain many redundant contents (same like our VRS-( INLINEFORM0 ) as in Table TABREF34 and TABREF33 ). This makes them unsuitable for controllable content selection. In text generation, a recent work from BIBREF41 shows they could control the contents by integrating a symbolic selector into the neural network. However, their selector is tailored by some rules only for the RDF triples. Moreover, even based on their fine-tuned selector, the fluency they observe is still slightly worse than a standard seq2seq.
We assume the content selector is the major bottleneck if we want a model that can achieve controllability without sacrificing performance. We can clearly observe in Table TABREF34 that the performance drop on Wikibio is marginal compared with Gigaword. The reason should be that selection on Wikibio is much easier than on Gigaword. The biography of a person almost always follows some simple patterns, like name, birthday and profession, but a news headline can contain information with various focuses. In our two tasks, due to the independence assumption we made on INLINEFORM0 and the model capacity limit, the content selector cannot fully fit the true selecting distribution, so the trade-off is necessary. Improving the selector with SOTA sequence labelling models like BERT BIBREF17 would be worth trying.
There are also other ways to improve. For example, we could learn a ranker to help us choose the best contents BIBREF63 . Or we could manually define some matching rules to help rank the selection BIBREF58 . In Table TABREF25 , we show the VRS model achieves very high metric scores based on an oracle ranker, so learning a ranker should be able to improve the performance straightforwardly.
Example from Wikibio
To see how we can manually control the content selection, Figure FIGREF42 shows an example from Wikibio; the model is mostly able to form a proper sentence covering all selected information. If the selector assigns very high probability to selecting some content and we force it to be removed, the resulting text can be unnatural (as in summary 4 in Figure FIGREF42, because the model has seen very little training text that omits the birthday information). However, thanks to the diversity of the content selector as shown in the previous section, it is able to handle most combinatorial patterns of content selection.
Posterior inference
Figure FIGREF41 further provides an example of how we can perform posterior inference given a provided text. Our model is able to infer which source contents are covered in the given summary. With the inferred selection, we can sample multiple paraphrases describing the same contents. As seen in Table TABREF34 and TABREF33 , the metric scores are remarkably high when decoding from the posterior inferred selections (last three rows), suggesting the posterior distribution is well trained. The posterior inference part could be beneficial for other tasks like content transfer among text BIBREF38 , BIBREF62 . The described source contents can be first predicted with the posterior inference, then transferred to a new text. | target some heuristically extracted contents, treat INLINEFORM1 as a latent variable and co-train selector and generator by maximizing the marginal data likelihood, reinforcement learning to approximate the marginal likelihood, Variational Reinforce-Select (VRS) which applies variational inference BIBREF10 for variance reduction |
3213529b6405339dfd0c1d2a0f15719cdff0fa93 | 3213529b6405339dfd0c1d2a0f15719cdff0fa93_0 | Q: What is the baseline model used?
Text: Introduction
Seeking information to assess whether some products or services suit one's needs is a vital activity for consumer decision making. In online businesses, one major hindrance is that customers have limited access to answers to their specific questions or concerns about products and user experiences. Given the ever-changing environment of products and services, it is very hard, if not impossible, to pre-compile an up-to-date knowledge base to answer user questions as in KB-QA BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . As a compromise, community question-answering (CQA) BIBREF4 is leveraged to enable existing customers or sellers to answer customer questions. However, one obvious drawback of this approach is that many questions are not answered, and even if they are answered, the answers and any follow-up questions are delayed, which is not suitable for interactive QA. Although existing studies have used information retrieval (IR) techniques BIBREF4 , BIBREF5 to identify a whole review as an answer to a question, it is time-consuming to read a whole review and the approach has difficulty answering questions in multiple turns.
Inspired by recent research in Conversational Reading Comprehension (CRC) BIBREF6 , BIBREF7 , we explore the possibility of turning reviews into a source of valuable knowledge of experiences and of providing a natural way of answering customers' multiple-turn questions in a dialogue setting. The conversational setting of machine reading comprehension (MRC) enables more specific questions and allows customers to either omit or co-reference information in context. As an example in a laptop domain shown in Table 1 , a customer may have 5 turns of questions based on the context. The customer first has an opinion question targeting an aspect “retina display” of a to-be-purchased laptop. Then the customer carries (and omits) the question type opinion from the first question to the second and continues by asking about a second aspect “boot-up speed”. For the third question, the customer carries over the aspect of the second question but changes the question type to opinion explanation. Later, the customer can co-reference the aspect “SSD” from the previous answer and ask for the capacity (a sub-aspect) of “SSD”. Unfortunately, there is no answer in this review for the fourth question, so the review may say “I don't know”. But the customer can still ask about other aspects, as in the fifth question. We formally define this problem as follows and call it review conversational reading comprehension (RCRC).
Problem Definition: Given a review that consists of a sequence of $n$ tokens $d=(d_1, \dots , d_n)$ , a history of past $k-1$ questions and answers as the context $C=(q_1, a_1, q_2, a_2, \dots , q_{k-1}, a_{k-1})$ and the current question $q_k$ , find a sequence of tokens (a textual span) $a=(d_s, \dots , d_e)$ in $d$ that answers $q_k$ based on $C$ , where $1 \le s \le n$ , $1 \le e \le n$ , and $s \le e$ , or return NO ANSWER if the review does not contain any answer for $q_k$ .
RCRC is a novel QA task that requires the understanding of both the current question $q_k$ and the dialogue context $C$ . Compared to traditional single-turn MRC, the key challenge is how to understand the context $C$ and the current question $q_k$ given that the question may involve co-reference resolution or context carryover.
To the best of our knowledge, there are no existing review datasets for RCRC. We first build a dataset called $(\text{RC})_2$ based on laptop and restaurant reviews from SemEval 2016 Task 5. We choose this dataset to better align with existing research on review-based tasks in sentiment analysis. Each review is annotated with a few dialogues focusing on some topics. Note that although one dialogue is annotated on a single review, a trained RCRC model can potentially be deployed among an open set of reviews BIBREF8 where the context may potentially contain answers from different reviews. Given the wide spectrum of domains in online business (e.g., thousands of categories on Amazon.com) and the prohibitive cost of annotation, $(\text{RC})_2$ is designed to have limited supervision as in other tasks of sentiment analysis. We adopt BERT (Bidirectional Encoder Representation from Transformers BIBREF9 ) as our base model since its variants achieve dominant performance on MRC BIBREF10 , BIBREF11 and CRC BIBREF6 tasks. However, BERT is designed to learn features for a wide spectrum of NLP tasks with a large amount of training examples. The task-awareness of BERT can be hindered by the weak supervision of the $(\text{RC})_2$ dataset. To resolve this challenge, we introduce a novel pre-tuning stage between pre-training and end-task fine-tuning for BERT. The pre-tuning stage is formulated in a similar fashion as the RCRC task but requires no annotated RCRC data and just domain QA pairs (from CQA) and reviews, which are readily available online BIBREF4 . We bring certain characteristics of the RCRC task (inputs/outputs) to pre-tuning to encourage BERT's weight to be prepared for understanding the current question and locate the answer if there exists one. The proposed pre-tuning step is general and can potentially be used in MRC or CRC tasks in other domains.
The main contributions of this paper are as follows. (1) It proposes a practical new task on reviews that allows multi-turn conversational QA. (2) To address this problem, an annotated dataset is first created. (3) It then proposes a pre-tuning stage to learn task-aware representation. Experimental results show that the proposed approach achieves competitive performance even compared with the supervised approach on a large-scale training data.
Related Works
MRC (or CRC) has been studied in many domains with formal written texts, e.g., Wikipedia (WikiReading BIBREF12 , SQuAD BIBREF10 , BIBREF11 , WikiHop BIBREF13 , DRCD BIBREF14 , QuAC BIBREF7 , HotpotQA BIBREF15 ), fictional stories (MCTest BIBREF16 , CBT BIBREF17 , NarrativeQA BIBREF18 ), general Web documents (MS MARCO BIBREF19 , TriviaQA BIBREF20 , SearchQA BIBREF21 ) and news articles (NewsQA BIBREF22 , CNN/Daily Mail BIBREF23 , and RACE BIBREF24 ). Recently, CRC BIBREF6 , BIBREF25 , BIBREF26 gains increasing popularity as it allows natural multi-turn questions. Examples are QuAC BIBREF7 and CoQA BIBREF6 . CoQA is built from multiple sources, such as Wikipedia, Reddit, News, Mid/High School Exams, Literature, etc. To the best of our knowledge, CRC has not been used on reviews, which are primarily subjective. Our $(\text{RC})_2$ dataset is compatible with the format of CoQA datasets so all CoQA-based models can be easily adapted to our dataset. Answers from $(\text{RC})_2$ are intended to be extractive (similar to SQuAD BIBREF10 , BIBREF11 ) rather than abstractive (generative) (such as in MS MARCO BIBREF19 and CoQA BIBREF6 ) because we believe online businesses are cost-sensitive so relying on human written answers are more reliable than machine generated answers.
Traditionally, knowledge bases (KBs) (such as Freebase BIBREF27 , BIBREF3 , BIBREF28 or DBpedia BIBREF29 , BIBREF30 ) have been used for question-answering BIBREF5 . However, in the ever-changing environment of online businesses, new products and services appear constantly, making it prohibitive to build a high-quality KB that covers all new products, services and subjective experiences from customers. Community QA (CQA) is widely adopted by online businesses BIBREF4 to help users get answers to their questions. However, since the answers are written by humans, it often takes a long time for a question to be answered, or it may not be answered at all, as discussed in the introduction section. There exists research that aligns reviews to questions in CQA as an information retrieval task BIBREF4 , BIBREF5 , but a whole review is hard to read and not suitable for follow-up questions. We novelly use CQA data for CRC (or potentially for MRC); it plays a significant role in encouraging domain representation learning on questions and contexts, which are largely ignored in existing research on MRC (or CRC).
Preliminary
In this section, we briefly review BERT (Bidirectional Encoder Representation from Transformers BIBREF9 ), which is one of the key innovations of unsupervised contextualized representation learning BIBREF31 , BIBREF32 , BIBREF33 , BIBREF9 . The idea behind these innovations is that although the word embedding BIBREF34 , BIBREF35 layer is trained from large-scale corpora, relying on the limited supervised data from end-tasks to train the contextualized representation is insufficient. Unlike ELMo BIBREF31 and ULMFiT BIBREF32 that are designed to provide additional features for an end task, BERT adopts a fine-tuning approach that requires almost no specific architecture design for end tasks, but parameter intensive models on BERT itself. As such, BERT requires pre-training on large-scale data (Wikipedia articles) to fill intensive parameters in exchange for human structured architecture designs for specific end-tasks that carry human's understanding of data of those tasks.
One training example of BERT is formulated as $(\texttt {[CLS]}, x_{1:j}, \texttt {[SEP]}, x_{j+1:n}, \texttt {[SEP]})$ , where [CLS] and [SEP] are special tokens and $x_{1:n}$ is a document split into two sides of sentences $x_{1:j}$ and $x_{j+1:n}$ . The key performance gain of BERT comes from two novel pre-training objectives: masked language model (MLM) and next sentence prediction.
Masked Language Model enables learning bidirectional language models and essentially encourages a BERT model to predict randomly masked words given their contexts. This is crucial for RCRC. For example, an example can be “its amazingly [MASK] when it boots up because of the [MASK] storage”. These two [MASK]'s encourage BERT to guess that the first mask could be “fast” and the second mask could be “SSD”, so as to learn some common knowledge about aspects of laptops and their potential opinions.
Next Sentence Prediction further encourages BERT to learn inter-sentence representations by predicting whether two sides around the first $\texttt {[SEP]}$ are from the same document or not. We remove this objective in our pre-tuning as the text format is different from BERT pre-training (discussed in the next Section).
In summary, we can see that the pre-trained BERT severely lacks RCRC task-awareness as there is no formulation for either context $C$ , the current turn question $q_{k}$ or possible answer spans as Wikipedia contains almost no questions or domain knowledge about online businesses. We resolve these issues in the next section.
Task-awareness Pre-tuning
To address the limitation of BERT on task-awareness, we introduce an intermediate stage of pre-tuning between BERT pre-training and fine-tuning on RCRC. This works in a similar spirit to the invention of BERT (or any other pre-trained language models) because it is also insufficient to learn the end task definition (or setting) solely on the limited supervised data (of that task). The task-awareness is determined by the inputs and outputs of RCRC, which introduce two directions for pre-tuning: (1) understanding the text inputs, including both domains and text formats (e.g., contexts, current questions). (2) understanding the goal of RCRC, including both having a text span or no answer. As such, we first define the textual format that is shared by both the RCRC and BERT pre-tuning in Section "Conclusions" . Then we introduce an auxiliary pre-tuning objective in Section "Auxilary Objective" .
Textual Format
Inspired by the recent implementation of DrQA for CoQA BIBREF6 and BERT for SQuAD, we formulate an input example $x$ for pre-tuning (or RCRC) from the context $C$ , the current question $q_k$ , and the review $d$ as follows:
$$\begin{split} x=(\texttt {[CLS]} \texttt {[Q]} q_1 \texttt {[A]} a_1 \dots \texttt {[Q]} q_{k-1} \texttt {[A]} a_{k-1} \\ \texttt {[Q]} q_{k} \texttt {[SEP]} d_{1:n} \texttt {[SEP]}), \nonumber \end{split}$$ (Eq. 7)
where past QA pairs $q_1, a_1, \dots , q_{k-1}, a_{k-1}$ in $C$ are concatenated, separated by the two special tokens [Q] and [A], and then concatenated with the current question $q_k$ to form the left side of BERT's input, while the right side is the review document. This format will be used for both pre-tuning and RCRC task fine-tuning. Note that the answer for a question with no answer in the context is written as a single word “unknown”. One can observe that although this format is simple and intuitive for humans to read, BERT's pre-trained weights have no notion of the semantics behind this format (e.g., where the current question is, how many turns the context has, and where the previous turn is), especially since the special tokens [Q] and [A] never appear during BERT pre-training.
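A minimal sketch of how such an input string can be assembled (the helper name build_input and the plain string concatenation are illustrative assumptions, not part of the described system):

# Illustrative sketch: assemble the RCRC input string from the context
# C = [(q1, a1), ..., (q_{k-1}, a_{k-1})], the current question q_k and the
# review text, following the format above. "unknown" marks unanswered turns.
def build_input(context, current_question, review):
    parts = ["[CLS]"]
    for q, a in context:
        parts += ["[Q]", q, "[A]", a if a is not None else "unknown"]
    parts += ["[Q]", current_question, "[SEP]", review, "[SEP]"]
    return " ".join(parts)

# Example:
# build_input([("does the retina display look good?", "it is amazing")],
#             "how is the boot-up speed?",
#             "boots up in seconds thanks to the SSD ...")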
Pre-tuning Data Generation
Based on the format defined in Section "Conclusions" , we can observe that getting BERT to be familiar with domain reviews is as easy as continually training BERT on reviews. However, enabling BERT to understand the context $C$ and the current question $q_k$ is more challenging as the pre-training data of BERT has almost no question. To resolve this issue, we combine QA pairs (from CQA data) and reviews to formulate the pre-tuning examples, as shown in Algorithm "Pre-tuning Data Generation" . Note that these two kinds of data are often readily available across a wide range of products in Amazon.com and Yelp.com.
Data Generation Algorithm. Input: a set of QA pairs $\mathcal {Q}$ and reviews $\mathcal {R}$ of the same entity, and the maximum number of context turns $h_{\text{max}}$ . Output: $\mathcal {T}$ : pre-tuning data. For each QA pair $(q^{\prime }, a^{\prime }) \in \mathcal {Q}$ , the algorithm initializes $x$ with [CLS], samples $h \in [0, h_{\text{max}}]$ past turns from $\mathcal {Q}\backslash (q^{\prime }, a^{\prime })$ and concatenates them as [Q] $q^{\prime \prime }$ [A] $a^{\prime \prime }$ , appends [Q] $q^{\prime }$ [SEP], randomly draws a review from $\mathcal {R}$ , and then with probability 0.5 inserts the true answer $a^{\prime }$ (otherwise a randomly chosen other answer) into the review before setting the pointers $(s, e)$ ; the steps are described in detail below.
To ensure the topic of a pre-tuning example is consistent between QAs and reviews, we assume QA pairs and reviews are organized under each entity (a laptop or a restaurant in our experiment) that customers focus on. The inputs to Algorithm "Pre-tuning Data Generation" are a set of QA pairs and reviews belonging to the same entity and the maximum number of turns in the context, which is the same as in the RCRC datasets. The output is the pre-tuning data $\mathcal {T}$ , where each example is denoted as $(x, (s, e))$ . Here $x$ is the input example and $(s, e)$ are the two pointers for the auxiliary objective (discussed in Section "Auxilary Objective" ). Given a QA pair $(q^{\prime }, a^{\prime })$ , we first build the left side of the input example $x$ : after initializing $x$ with [CLS], we randomly determine the number of turns to use as context and concatenate ( $\oplus $ ) these turns of QA pairs, where $\mathcal {Q}\backslash (q^{\prime }, a^{\prime })$ ensures the current QA pair $(q^{\prime }, a^{\prime })$ is not chosen as context. We then concatenate the current question $q^{\prime }$ followed by [SEP]. Next, we build the right side of $x$ and the output pointers. We randomly draw a review from $\mathcal {R}$ and, to challenge the pre-tuning stage to discover the semantic relatedness between the question and its answer (as required by the auxiliary objective), we decide at random whether the right side of $x$ will contain the true answer $a^{\prime }$ or a fake, randomly drawn answer. We insert the chosen answer chunk into the review at a randomly picked location and initialize the two pointers $s$ and $e$ ; if the inserted chunk is the true answer $a^{\prime }$ , we update $s$ and $e$ to point to its chunk boundaries. Otherwise, BERT should detect that there is no answer for $q^{\prime }$ on the right side and both pointers point to [CLS] ( $s, e = 1$ ).
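A simplified sketch of this generation procedure (a plain-Python approximation; whitespace tokenization stands in for BERT's tokenizer, the function and variable names are illustrative, and each entity is assumed to have more than one QA pair):

import random

def generate_pretuning_data(Q, R, h_max):
    # Q: list of (question, answer) string pairs of one entity;
    # R: list of reviews, each a list of sentence strings; h_max: max context turns.
    T = []
    for (q, a) in Q:
        left = "[CLS] "
        for _ in range(random.randint(0, h_max)):
            q2, a2 = random.choice([p for p in Q if p != (q, a)])
            left += "[Q] " + q2 + " [A] " + a2 + " "
        left += "[Q] " + q + " [SEP] "
        review = list(random.choice(R))
        positive = random.random() > 0.5
        chunk = a if positive else random.choice([ans for (_, ans) in Q if ans != a])
        pos = random.randint(0, len(review))      # pick one of the sentence gaps
        review.insert(pos, chunk)
        x = left + " ".join(review) + " [SEP]"
        if positive:                              # pointers to the chunk boundaries
            s = len((left + " ".join(review[:pos])).split()) + 1
            e = s + len(chunk.split()) - 1
        else:
            s = e = 1                             # no answer: point to [CLS]
        T.append((x, (s, e)))
    return T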
Algorithm "Pre-tuning Data Generation" is run $p=10$ times to allow for enough samplings of data. As we can see, although labeled training examples for RCRC are expensive to obtain, harvest a large amount of pre-tuning data is easy. Following the success of BERT, we still randomly mask some words in each example $x$ to learn contextualized representations on domain texts.
Auxilary Objective
Besides adapting the input $x$ to the domain and the RCRC task, it is also desirable to allow pre-tuning to adapt BERT to the goal of the RCRC task, which is to predict a token span or NO ANSWER. Besides MLM from BERT, we further introduce an auxiliary objective called answer chunk detection that aligns BERT with the RCRC setting, except that we only predict the token span of an answer chunk from CQA. Further, this task prepares BERT for predicting NO ANSWER from a review by detecting a negative, randomly drawn answer.
Let $\text{BERT}(\cdot )$ be BERT's transformer model. We first obtain the hidden representation of BERT as $h=\text{BERT}(x)$ . Then the hidden representation is passed to two separate dense layers followed by softmax functions: $l_1=\text{softmax}(W_1 \cdot h + b_1)$ and $l_2=\text{softmax}(W_2 \cdot h + b_2)$ , where $W_1$ , $W_2 \in \mathbb {R}^{r_h}$ and $b_1, b_2 \in \mathbb {R}$ and $r_h$ is the size of the hidden dimension (e.g., 768 for $\text{BERT}_{\text{BASE}}$ ). Training involves minimizing the averaged cross entropy on the two pointers $s$ and $e$ generated in Algorithm "Pre-tuning Data Generation" : $$\mathcal {L} = -\frac{1}{2}\big (\mathbb {I}(s)^{\top } \log l_1 + \mathbb {I}(e)^{\top } \log l_2\big ),$$
where $\mathbb {I}(s)$ and $\mathbb {I}(e)$ are one-hot vectors representing the two starting and ending positions. For a positive example (with true answer $a^{\prime }$ randomly inserted in the review), $s$ and $e$ are expected to be $\text{Idx}_{\texttt {[SEP]}} < s\le |x|$ and $s\le e\le |x|$ , respectively, where $\text{Idx}_{\texttt {[SEP]}}$ is the position of the first [SEP]. For a negative example (with a random answer (not $a^{\prime }$ ) mixed into the review), $s,e=1$ indicates the two pointers must point to [CLS].
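A possible PyTorch rendering of these two heads and the loss (a sketch under the assumption that hidden_states is $h$ with shape [batch, seq_len, $r_h$] and that start_pos / end_pos hold the pointers $s$ and $e$; nn.CrossEntropyLoss applies the softmax of the formulation above internally):

import torch.nn as nn

class ChunkDetectionHead(nn.Module):
    def __init__(self, r_h=768):
        super().__init__()
        self.start = nn.Linear(r_h, 1)   # W_1, b_1
        self.end = nn.Linear(r_h, 1)     # W_2, b_2
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, hidden_states, start_pos, end_pos):
        l1 = self.start(hidden_states).squeeze(-1)   # [batch, seq_len] start logits
        l2 = self.end(hidden_states).squeeze(-1)     # [batch, seq_len] end logits
        # averaged cross entropy over the two pointers s and e
        return (self.loss_fn(l1, start_pos) + self.loss_fn(l2, end_pos)) / 2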
After pre-tuning, we fine-tune on the RCRC task in a similar fashion to the auxiliary objective, except this time there is no need to perform MLM.
Experiments
We aim to answer the following research questions (RQs) in the experiment:
RQ1: What is the performance of using BERT compared against CoQA baselines?
RQ2: Upon ablation studies of different applications of BERT, what is the performance gain of pre-tuning?
RQ3: What is the performance of pre-tuning compared to using (large-scale) supervised data?
Pre-tuning datasets
To be consistent with existing research on review-based tasks such as sentiment analysis, we adopt SemEval 2016 Task 5 as the review source for RCRC, which contains two domains: laptop and restaurant. Then we collect reviews and QA pairs for these two domains. For the laptop domain, we collect the reviews from BIBREF36 and QA pairs from BIBREF37, both under the laptop category of Amazon.com. We exclude products in the test data of $(\text{RC})_2$ to make sure the test data is not used to fit any model parameters. This gives us 113,728 laptop reviews and 18,589 QA pairs. For the restaurant domain, we collect reviews from the Yelp dataset challenges but crawl QA pairs from Yelp.com. We select restaurants with at least 100 reviews, as other restaurants seldom have any QA pairs. This yields 753,096 restaurant reviews and 15,457 QA pairs.
To compare with a supervised pre-tuning approach, we further leverage the CoQA dataset BIBREF6 . It comes with 7,199 documents (passages) and 108,647 QA pairs of supervised training data, with domains in Children's Stories, Literature, Mid/High School, News, and Wikipedia.
(RC) 2 (\text{RC})_2 Datasets
To the best of our knowledge, there are no existing datasets for RCRC. We keep the split of training and testing of the SemEval 2016 Task 5 datasets and annotate dialogues of QAs on each review. To ensure our questions are real-world questions, annotators are first asked to read CQAs of the pre-tuning data. Each dialogue is annotated to focus on certain topics of a review. The textual spans are kept to be as short as possible but still human-readable. No-answer questions are also annotated, which have certain topical connections with the nearby questions or answers. Annotators are encouraged to label about 2 dialogues from a testing review to get enough testing examples. One training review is encouraged to have 1 dialogue to have good coverage of reviews. Each question is shortened as much as possible to omit existing information in the past turns. The annotated data is in the format of CoQA BIBREF6 to help future research. The statistics of $(\text{RC})_2$ dataset is shown in Table 2 . We split 20% of the training reviews as the validation set for each domain.
Compared Methods
We compare the following methods by training/fine-tuning on $(\text{RC})_2$ . All the baselines are run using their default hyper-parameters.
DrQA is a CRC baseline coming with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is modified to support answering no answer questions by having a special token unknown at the end of the document. So having a span with unknown indicates NO ANSWER. This baseline answers the research question RQ1.
DrQA+CoQA is the above baseline pre-tuned on CoQA dataset and then fine-tuned on $(\text{RC})_2$ . We use this baseline to show that even DrQA pre-trained on CoQA is sub-optimal for RCRC. This baseline is used to answer RQ1 and RQ3.
BERT is the vanilla BERT model directly fine-tuned on $(\text{RC})_2$ . We use this baseline for ablation study on the effectiveness of pre-tuning. All these BERT's variants are used to answer RQ2.
BERT+review first tunes BERT on domain reviews using the same objectives as BERT pre-training and then fine-tunes on $(\text{RC})_2$ . We use this baseline to show that a simple domain-adaptation of BERT is not good.
BERT+CoQA first fine-tunes BERT on the supervised CoQA data and then fine-tunes on $(\text{RC})_2$ . We use this baseline to show that pre-tuning is very competitive even compared with models trained from large-scale supervised data. This also answers RQ3.
BERT+Pre-tuning first pre-tunes BERT as proposed and then fine-tunes on $(\text{RC})_2$ .
Hyper-parameters and Evaluation Metrics
We choose BERT base model as our pre-tuning and fine-tuning model, which has 12 layers, 768 hidden dimensions and 12 attention heads (in transformer) with total parameters of 110M. We cannot use the BERT large model as we cannot fit it into our GPU memory for training. We set the maximum length to be 256 with a batch size of 16. We perform pre-tuning for 10k steps as further increasing the pre-tuning steps doesn't yield better results. We fine-tune 6 epochs, though most runs converged just within 3 epochs due to the pre-trained/tuned weights of BERT. Results are reported as averages of 3 runs of fine-tuning (3 different random seeds for tuning batch generation).
To be consistent with existing research, we leverage the same evaluation script from CoQA. Similar to the evaluation of SQuAD 2.0, CoQA script reports turn-level Exact Match (EM) and F1 scores for all turns in all dialogues. EM requires the answers to have exact string match with human annotated answer spans. F1 score is the averaged F1 scores of individual answers, which is typically higher than EM and is the major metric.
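For reference, the token-level F1 in such evaluation scripts is essentially bag-of-tokens overlap; a simplified sketch follows (the official CoQA script additionally normalizes text, handles no-answer cases, and averages over multiple human references):

from collections import Counter

def token_f1(prediction, gold):
    pred_toks, gold_toks = prediction.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)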
Result Analysis
As shown in Table 3 , BERT+Pre-tuning has significant performance gains over many baselines. To answer RQ1, we can see that BERT is better than DrQA baseline from CoQA. To answer RQ2, we notice that by leveraging BERT+Pre-tuning, we have about 9% performance gain. Note that directly using review documents to continually pre-training BERT does not yield better results for BERT+review. We suspect the task of RCRC still requires certain degrees of general language understanding and BERT+review has the effect of (catastrophic) forgetting BIBREF38 the strength of BERT. To answer RQ3, we notice that large-scale supervised CoQA data can boost the performance for both DrQA and BERT. However, our pre-tuning stage still has competitive performance and it requires no annotation at all.
Conclusions
In this paper, we propose a novel task called review conversational reading comprehension (RCRC). We investigate the possibility of interactive question answering by using reviews as knowledge of user experiences. We first build a dataset called $(\text{RC})_2$ , which is derived from popular review datasets for sentiment analysis. To resolve the issues of limited supervision introduced by the prohibitive cost of annotation, we introduce a novel pre-tuning stage to perform task-adaptation from a language model. This pre-tuning stage can potentially be used for any MRC or CRC task given it has no requirement on annotation but large QA and review corpora available online. Experimental results show that the pre-tuning approach is highly effective and outperforms existing baselines or highly competitive with supervised baselines trained from a large-scale dataset. | The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data |
70afd28b0ecc02eb8e404e7ff9f89879bf71a670 | 70afd28b0ecc02eb8e404e7ff9f89879bf71a670_0 | Q: Is this auto translation tool based on neural networks?
Text: Introduction
Reading and writing comments, method names and variable names is a crucial part of software engineering and as such, programs have both a human language, the language of identifiers and comments, in addition to the source-code language (eg Java or Python). This has meant that non-English speakers are often second class citizens when learning to program BIBREF0. In this paper we present a tool for translating a program from one human-language to another to assist in code education, which could reduce the barrier to computer science education for non-English speakers.
The main contributions presented in this paper are:
Analysis of 1.1M non-English code projects on GitHub
CodeInternational: A tool which can translate code between human languages, powered by Google Translate.
Validation of CodeInternational by evaluating the translation of 1,000 randomly chosen projects from GitHub.
Use of CodeInternational to automatically translate the popular Karel textbook into 100+ languages. We further extend the textbook to parse and run KarelJava code in any language; we report adoption by classrooms around the world.
Our human-language code translator was inspired by a desire to make programming more accessible BIBREF1. An accurate and useful translator would enable faster localization of instruction materials and it would allow learners (as well as practitioners) to translate code that they are working with.
As programming becomes more of a requisite common knowledge skill, we expect coding education to become open-access to everyone. One barrier to this goal is human language. English is currently the modal language of programming instruction perhaps given that the keywords of most of the popular languages, Java, JavaScript etc, are in English (even including Python and Lua, invented in the Netherlands and Brasil respectively). However, a majority of the world, estimated in 2008 at 80%, can't “use" English for communication and substantially more don't speak English as their L1 language (the technical term for one's arterial language, aka, mother tongue) BIBREF2. Should the more than 6 billion non-English speakers learn to program in their native language or in English? This question is debated, which we address in the discussion.
We take the position that whether or not code instruction is in English, if students do not speak English as their L1 language, their code education would benefit from the ability to translate Code between their preferred language and English.
Introduction ::: Related Work
To the best of our knowledge, automatic translation of code between human languages, did not appear in literature, making us hypothesize: it is either difficult, or had remained ignored. Nonetheless, we summarize related work that motivate our contribution.
Translation of Text: Automatic translation of natural language has recently achieved high accuracy and is used in highly sensitive contexts BIBREF3, BIBREF4, BIBREF5. At the time of writing this article, Google Translate uses Neural Machine Translation BIBREF6 to translate pairwise between languages and has become incredibly accurate, at least for languages common on the web BIBREF7. Further research has been done on transliterating text BIBREF8, BIBREF9. However, current state-of-the-art methods for text translation fail at translating code. Directly running a translation algorithm on code would fail to distinguish between code syntax and identifiers, would not recognize terms embedded in identifiers, e.g. with camel case getElementAt, and could produce code with one identifier name having different translations on separate lines. As such, current automatic text translation, if run directly on code, would produce malfunctional code. Code Instruction in Non-English: In 2017, Dasgupta and Hill published seminal work outlining the importance of learning to code in one's own language. They conclude that "novice users who code with their programming language keywords and environment localized into their home countries' primary language demonstrate new programming concepts at a faster rate than users from the same countries whose interface is in English" BIBREF10. Since then, there has been a large set of papers expanding on the barriers for non-native English speakers. Guo et al. BIBREF11 survey over 800 non-English-speaking students learning to code, who report on the many challenges that come with not understanding English while coding, findings reinforced by BIBREF12, BIBREF13. This has led to preliminary work on translating compiler errors BIBREF14 and advocacy for language-free block programming BIBREF15. However, while language-free programming is a great step forward for younger students, it doesn't address the needs of CS1 students who program in common programming languages like Python or Java. While all of this work motivates our contribution, none has attempted an automatic solution to the problem, making crowd-translation a viable alternative BIBREF16.
Mining GitHub: To understand the patterns of code that students and practitioners use, we analyze public repositories on GitHub. Other researchers have also analyzed GitHub, sometimes via the dataset and tools provided by BIBREF17, including work on social diversity of teams BIBREF18 and affiliation influence on code popularity BIBREF19. This has led to a set of best practices for navigating the promises and perils of mining GitHub BIBREF20. A growing number of students are using GitHub in software engineering courses BIBREF21, which makes it a valuable resource for understanding code of the general population, including students.
Code Conversion: There is a rich literature of work to translate code between programming languages, such as C or C++ to Java BIBREF22, BIBREF23, or even from English to code BIBREF24. However, the emphasis is often on maintaining efficiency, not on making code readable for students. We focus on translating the human language of code. Byckling et al BIBREF25 analyze naming conventions of identifiers based on their function (fixed, iterators, transformers, etc), and correlate the naming consistency with the students' learning experience. This motivates aspects of our translation. See Section SECREF22.
Human Languages on GitHub
How do non-English speakers program in a language like Java, where the keywords and core libraries are written in English? We employ a data driven approach to tell the story of non-English code and inform the decisions we made in our auto-translator. We analyzed Java repositories on GitHub, the largest host of source code in the world, where 1.1 million unique users host 2.9 million public Java projects. We downloaded and analyzed the human language used for writing comments (in Java code), naming identifiers (method and variable names), and writing git commit messages. We focused on Java code as it is both one of the most popular source-code languages on GitHub and in the classroom. A selection of results from this study are that:
Non-English code is a large-scale phenomena.
Transliteration is common in identifiers for all languages.
Languages clusters into three distinct groups based on how speakers use identifiers/comments/transliteration.
Non-latin script users write comments in their L1 script but write identifiers in English.
Right-to-left (RTL) language scripts, such as Arabic, have no observed prevalence on GitHub identifiers, implying that existing coders who speak RTL languages have substantial barriers in using their native script in code.
This is, to the best of our knowledge, the first analysis of the human languages on GitHub. See Figure FIGREF6 for an overview.
Users on GitHub do not state their L1 (arterial) language. While a subset of users optionally state their country, this is neither common nor reliable. To estimate a user's preferred language we use the language that they use in the git commit message. To find subsets of users who speak a given language, we search for all users who write git commits in that language. We observe that, especially in personal projects, users write commit messages in their L1 language at a higher rate than comments or identifiers. To identify languages we use Google Language Detect which is highly accurate (more so for common internet languages) and can identify languages with non-Roman-alphabet text which has been transliterated; for example it can detect both 算法 (the Chinese characters for “algorithm”) and “suanfa” (the Mandarin transliteration) as Chinese.
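A rough sketch of this estimation step (using the open-source langdetect package as a stand-in for the Google language-detection tool; the function name is illustrative):

from collections import Counter
from langdetect import detect   # stand-in for Google Language Detect

def commit_language(commit_messages):
    # Majority vote of detected languages over a user's commit messages.
    langs = Counter()
    for msg in commit_messages:
        try:
            langs[detect(msg)] += 1
        except Exception:
            continue            # very short or empty messages may fail detection
    return langs.most_common(1)[0][0] if langs else None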
Of the 1.1 million GitHub users, 12.7% wrote commit messages in non-English languages. Of the non-English languages Chinese was the most common (28.6% of non-English committers), followed by Spanish, Portuguese, French, and Japanese. More than 100 languages were detected in commit messages on public Java projects. Figure FIGREF6 contains breakdowns and the appendix contains the full list. This does not match the distribution of non-English in web content (55% English) with both major and minor languages underrepresented. For example the prevalence of Spanish on GitHub (2.1%) is about half of webcontent (5.1% BIBREF26) and further trails native speakers (7.8% of the worlds population BIBREF27).
Github does not present a random sample of programs written in the world, and we consider the relevant confounds this introduces. To that point, we believe the under-representation of certain languages is a form of Survivorship Bias. It suggests that users have found barriers to entry towards joining the GitHub community. Those barriers could derive from the English dominance of programming languages, code instruction, or the github interface.
Human Languages on GitHub ::: Non-English in Java
The use of non-English in identifiers and comments is large for the population of users who we define as non-English "speakers" (those who use non-English in their git-commit messages). 90% of users who use a non-English language in the commit messages also use that language in their comments or as identifiers. We note that, in Java, identifiers can be written in any script.
Surprisingly, the patterns of non-English usage differs substantially when we condition on users "speaking" different languages. For example, among the detected Spanish speakers, 87.2% percent of users write identifiers in Spanish. On the other hand, among Chinese users only 23.3% of users write code with Chinese identifiers (either in Chinese script or ASCII). Figure 1b shows coding patterns conditioned on users speaking different languages. For each language we plot the percent of projects with identifiers in the language against the percent of projects with comments in the language. Languages naturally cluster into three categories: (1) Major-Euro-Latin: languages with high use of non-English identifier including Spanish, German and French (2) Non-Latin: languages in non-latin scripts including Russian and Chinese which have low use of non-English identifiers and (3) English-Comment: Programmers write their comments in English (> 70% of projects only have English comments). This group contains many smaller and non-European languages like Dutch and Bahasa Indonesia. 0% of projects in this group still uses their L1 language in identifiers.
The use of identifiers in local language (as opposed to English) is very clearly split on whether languages use the Latin alphabet. On average 82% of projects from users speak languages with different scripts like Chinese, Korean, or Russian have only English identifiers, compared to 12% of projects from Latin alphabet users ($p < 0.0001$). The percentage of projects with only English comments is roughly correlated to the English Proficiency Index BIBREF28 of the corresponding countries ($\rho = 0.42$ $p < 0.01$).
Human Languages on GitHub ::: Transliteration on GitHub
Transliteration is the process of transferring a word from the alphabet of one language to another (eg नमस्ते दुनिया -> namaste duniya). We observed that most Java code in human languages that have non-ASCII scripts like Kanji or Devanagari, or even Spanish accents like ñ, will have been "transliterated" into ASCII.
The Java Language Specification states that "letters and digits (in identifiers) may be drawn from the entire Unicode character set, which supports most writing scripts". This specification is not widely known, and even though Java supports non-ASCII identifiers, there can be complexities of file encodings across different operating systems.
We find that regardless of L1 language most users transliterate identifiers: among L1 Chinese speakers, 93% of projects have identifiers which are only written in ASCII. Similarly, in Spanish 88% of projects have only ASCII identifiers. As a concrete example, in GitHub Java code "numero" is 3.8x more common than "número". Among comments, languages differ greatly: 99% of Chinese projects have non-ASCII comments compared to only 53% of Spanish projects. As an example, a comment above a method states in Chinese script that it calculates the Fibonacci sequence ("//斐波那契"), while the method name (an identifier) uses a transliteration of the phonemes: "public int feibonaqie(int n)". This is a common pattern: within comments, 计数 (Chinese for count) is 4.0x more common than jishu, the transliteration; however, in identifiers jishu is 4.8x more common. The difference in transliteration patterns between Chinese and Spanish suggests a different intent: in Spanish transliteration is used to avoid file encoding errors, in Chinese it is to prevent a mix of scripts among identifiers.
Human Languages on GitHub ::: Right-to-Left Languages on GitHub
One question that we did not have a solid pre-conception for was: How do Java users who speak languages with right-to-left (RTL) scripts like Arabic, Urdu or Hebrew, write code?
18,961 users on GitHub report their country as one where a RTL script (Arabic or Hebrew) is the primary script. Those users have 8,060 public Java repositories of which only 50 repositories (0.6%) have Arabic or Hebrew script (excluding string literals). Of those repositories, only a single Java file had a single identifier written in Arabic and none in Hebrew. It is extremely rare for methods or identifiers to be a mix of RTL and LTR.
Code International
The GitHub analysis is coherent with the contemporary narrative: there are perhaps hundreds of millions of learners who will not speak English as their L1 language. For those learners, teachers need a tool to translate code so they can give examples with less congitive load. Similarly students need a tool to understand the non-English code they encounter. Finally, to a growing extent English speakers will begin to interact with code written in other languages.
To address this need, we designed a tool to help programmers, regardless of their spoken language, access code in many languages. The tool, which we call CodeInternational, takes in code written in either Java or Python with comments and identifiers written in a human-language and translates the comments and identifiers into another human-language. It supports the growing set of human languages covered by Google Translate and is adaptive to the particular context of source-code. To translate code, it first parses the code and extracts four types of tokens:
[leftmargin=3mm]
Comments: inline or multi-line comments. Their purpose is for the programmer to communicate to programmers (including herself) on the purpose of code sections.
Immutable: consisting of language keywords (while, void, etc), and identifiers imported from libraries that are external to the code being translated (e.g. FileReader of java.io). By default this group is not translated.
Target identifiers: including variable and function names that are defined in the code base undergoing translation.
String literals: In some cases a user may want String literals to be translated, other times they should be unchanged.
Our translation algorithm is as follows. We (1) collect all of the target identifiers defined in the codebase and (2) translate them (enforcing that if two identifiers have the same name, they are given the same translation). Once the identifiers are translated we (3) translate the comments preserving structure and references to identifiers. (4) Finally string literals are optionally translated. See Figure FIGREF20 for a highlevel depiction and Figure FIGREF23 for a concrete example. Each of these steps has surprising challenges. In this section we cover the corresponding solutions we developed. The mapping of identifier translations that the tool decides on is preserved to assist any external source which needs to refer to the newly translated identifiers (such as text in a text-book or code in a related project).
CodeInternational is implemented in Python. Tokenization is performed using a modified version of "Javalang" (for Java) and the "Parser" library (for Python). Supporting other programming languages requires a small amount of extra work.
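A high-level sketch of this four-step flow (a simplification, not the actual CodeInternational API; tokens is assumed to be a list of (kind, text) pairs from the parser, and translate_identifier / translate_comment are the per-token translators sketched in the following subsections):

def translate_code(tokens, translate_identifier, translate_comment, preset_map=None):
    id_map = dict(preset_map or {})        # prior identifier translations, if any
    # Steps (1)-(2): collect target identifiers and translate each name once.
    for kind, text in tokens:
        if kind == "target_identifier" and text not in id_map:
            id_map[text] = translate_identifier(text)
    out = []
    for kind, text in tokens:
        if kind == "target_identifier":
            out.append(id_map[text])       # same name -> same translation
        elif kind == "comment":
            out.append(translate_comment(text, id_map))   # step (3)
        else:
            out.append(text)               # immutable tokens; string literals optional (4)
    return "".join(out), id_map            # posterior map for external reuse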
Code International ::: Translating Identifiers
In order to properly translate identifiers, we consider the following:
Identifier segmentation: Translating an identifier using a tool like Google Translate does not work by default as identifiers are often composed of unsegmented words. For example: getFavoriteNumber is readable to a human as "get favorite number" but is not parsable by an online translator. We segment identifiers using naming conventions (e.g. camelCaseVariable, PascalCaseClass, UPPERCASE_CONSTANT). We thus segment identifiers into phrases which we feed into an automatic translator. We then recombine the translated phrase using the original casing convention. For example, to translate the method name identifier "turnAround" into Spanish: "turnAround" is segmented into "turn around" which is translated into "media vuelta" which is formatted into the original camelCase "mediaVuelta". Advances in artificial intelligence for word segmentation enable a future version of this tool to break up words without a given segmentation (eg "turnaround").
Verb prior: The correct translation for a phrase can be ambiguous, especially without context. As an example the method "move" translated into Spanish could be translated into a noun ("movimiento", movement) or a verb ("moverse"). For method identifiers there is an implicit context that an action is being performed. We incorporate this context by placing a prior on the first word being a verb. Thus, for example, when we translate "move()" into Spanish we chose "moverse()" instead of "movimiento()", the noun movement, as Google suggests.
In addition to knowing that the translations of methods should start with verbs, we also have a select number of reasonable tenses for the verb: infinitive (eg "toMove"), third person present (eg "moves" as in "he moves") and imperative (eg "move"). In most languages, including English, we translate verbs with a prior that they be in the imperative tense. In English you would expect a method to be "getObject()", the imperative. However some languages, especially Romance languages, use the infinitive of the verb: as an example, "obtener", the infinitive of "obtain", is 200x more common on GitHub than "obtenga", the imperative.
Translating short identifiers: Short variable names that are used for mathematical symbols or as iterators should not be translated. This is especially important for the canonical for-loop identifier "i". For example, translating the code "for(int i = 0; i < 10; i++)" into Spanish should not produce "for(int yo = 0; yo < 10; yo++)" even though "yo" is the translation of the pronoun "I". We only translate identifiers which are at least two characters long. This exception has its own edge case: CJK (Chinese, Japanese, Korean) identifiers can be non-mathematical names even if only one character long.
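A minimal sketch combining the segmentation, casing-recombination, and short-identifier rules above (our approximation of the described behaviour; translate_phrase stands in for the Google Translate call, and the one-character CJK exception is omitted):

import re

def translate_identifier(identifier, translate_phrase):
    if len(identifier) < 2:                      # keep short names like "i"
        return identifier
    words = [w for w in re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", identifier) if w]
    translated = translate_phrase(" ".join(w.lower() for w in words)).split()
    if identifier.isupper():                     # UPPERCASE_CONSTANT
        return "_".join(w.upper() for w in translated)
    if "_" in identifier:                        # snake_case
        return "_".join(w.lower() for w in translated)
    first_lower = identifier[0].islower()        # camelCase vs PascalCase
    recombined = "".join(w.capitalize() for w in translated)
    return recombined[0].lower() + recombined[1:] if first_lower else recombined

With Google Translate as translate_phrase, "turnAround" should come back as "mediaVuelta" for Spanish, matching the example above.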
Code International ::: Translating comments
Once we have finished translating identifiers, we translate the comments in a program. Translating comments has two complexities: (1) we would like to maintain the comment structure, eg if it is a block javadoc comment, we would like to reserve the column of '*'s on the left margin of the comment and (2) we want references to identifiers to be translated exactly as they were in the code.
To translate a comment we classify the structure (eg JavaDoc, BlockComment PythonDocString). We then strip the text out, translate it, and reformat it back into the same structure. For multi-line comments we are conscious not to increase the maximum length of a line, taking into account the wider width of CJK characters.
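A simplified sketch of this comment step (our approximation; translate_text stands in for Google Translate, identifier references are kept consistent by substituting from the identifier map, and re-wrapping to the original line width is omitted):

def translate_comment(comment, translate_text, id_map=None):
    # Strip comment markers, translate the text, then rebuild the structure.
    body = []
    for line in comment.splitlines():
        body.append(line.strip().lstrip("/*#").strip())
    translated = translate_text(" ".join(b for b in body if b))
    for original, local in (id_map or {}).items():
        translated = translated.replace(original, local)   # keep identifier names aligned
    stripped = comment.lstrip()
    if stripped.startswith("/**"):
        return "/**\n * " + translated + "\n */"            # Javadoc block
    if stripped.startswith("/*"):
        return "/*\n * " + translated + "\n */"             # block comment
    if stripped.startswith("//"):
        return "// " + translated                           # inline comment
    return "# " + translated                                # Python-style comment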
Code International ::: Translating Right-to-Left languages
Arabic, Hebrew, Farsi, and Urdu are popular right-to-left (RTL) natural languages. When translating code to RTL languages, comments can be translated (mixing RTL within the left-to-right syntax) and optionally transliterated (keeping left-to-right flow). Some of the difficulty in RTL transliteration is in distinguishing between short and long vowels. Further, these languages contain consonants that cannot be described using the Latin alphabet and are generally represented with numbers in the transliteration culture – e.g. 7 for ح, which is closest to the Latin letter “h”, e.g. in “Ahmad”.
When translating non-Latin scripts which are LTR we give the user the option to transliterate identifiers and separately, to transliterate comments or not. Transliteration is currently supported in Arabic, Chinese, Hebrew, Japanese, Korean, and Russian.
Code International ::: Prior and posterior translations
Translations of code need to be coherent with respect to other translations of written text (or other files) that refers to the code. To that end our translator takes in, and uses, a preset identifier translation map and returns the translations it made. This system enables having humans override translations, translating text-books with text that references embedded code and more.
Translating Github
How good is a translation of source-code from one human language to another? Evaluating quality of a translation is hard without a large collection of native speakers and since we are powered by Google, evaluation can devolve into evaluating how accurate Google Translation is. Such an evaluation is a moving target: Google Translation is perennially improving.
To evaluate our translator we randomly selected 1,000 (1k) single-file projects from public GitHub Java and translated them into the languages: Chinese, Spanish and Arabic. We measure (1) how often the translated code still compiles and (2) what percent of the identifiers that we attempt to translate are translatable.
Of the 1k projects 100% maintained their ability to be compiled regardless of whether we translated or transliterated the comments or identifiers. From the 1k projects 91% of the identifiers were able to be translated. The nine percent that were not able to be translated were mainly abbreviations (such as users who named a variable frac instead of fraction or pct instead of percent). This is an opportunity for future work. Overall the results paint the picture of a functioning tool which is ready for use.
International Karel
Our motivation for developing an automatic human-language code translation tool was to support education for non-English speakers. To that end we used CodeInternational to translate a web version of the popular Karel the Robot Learns Java reader by Eric Roberts BIBREF29, a textbook for Karel the Robot, a grid-world robot invented by Richard Pattis BIBREF30 to help CS1 students learn to program. Karel has been the inspiration for assignments on platforms such as Code.org and CodeHS and is a staple of the first weeks of CS1 BIBREF31.
We translated a Karel reader in Python and Java to 100 languages. The translated web-reader is free to use, and is hosted at [redacted]. At time of publication the reader has been public (without advertising) and has already been used by over 3,000 people from 50 countries. With permission from Eric Roberts, we first made an eBook version of his Karel reader and simplified the English used BIBREF32. The reader merges text and code in a seemless fashion. Then, for each language: we (a) translated code on each chapter using CodeInternational and (b) translated the reader text such that any reference to identifiers in the example code would use the same translations. In order to have text which is consistent with the corresponding code we heavily rely on the "Posterior identifier translation map" from CodeInternational's translations.
International Karel ::: Line-highlighting in any language
To make the Karel reader a fantastic learning experience we made it so that each code-snippet is runnable. When run, the program executes the code and highlights the corresponding lines as the program is run, regardless of the complexity of the program's control flow. In order to line-highlight we parse and compile the Python-Karel or Java-Karel programs using an engine written in JavaScript. Our line-highlighter builds upon the compiler described in Informatics Education using Nothing but a Browser BIBREF33.
Our Karel reader can run and line-highlight in any human-language that we translate into. For example our compiler can execute and line-highlight the command "moverse()" if the code is written in Spanish, "移动()" if the program is written in Chinese, "emshi()" if the program is written in Arabic, or "move()" if the Karel program is written in English. We chose to only transliterate commands for RTL scripts. Figure FIGREF27 shows three screenshots from the international Karel reader, though of course a PDF is unable to capture the ability of the reader to line-highlight code.
International Karel ::: Usage in Classrooms
We know of four classes where the internationalized Karel eReader has been used. These classes are around the world in: Istanbul, Bogota, Prague and [Redacted]. The eReader has been visited by >1k users in 3 months and both the English and the non-English version of the website have a high average session duration (9.7 min and 10.1 min respectively). Moreover, the tool has been used to translate the CSBridge curriculum website and assignments; HTML that mixes code and description (used by 400 students / year).
Discussion
Whether English should be used as the sole language of instruction has been debated. Case for code instruction in English: In order to program professionally, one will have to interact with keywords and libraries that are written for English speakers. English is the language of code, and it is practically required of anyone who wants to interact globally: correspond via email, read stack-overflow, watch educational videos, travel, etc. For classrooms where English is the main form of instruction, but students are not yet fluent, CodeInternational can be used to assist learning English and learning to program. Students could improve their English through coding, e.g. by placing English code against their L1 code, side-by-side. Case for instruction on transl(iter)ated code: On the other hand, people argue that it is beneficial for students to have much of their coding instruction in their L1 language, and doing so benefits access to CS. The primary reason for this is intuitive: the cognitive load of learning to program is already high. Moreover, if students learn coding using their L1 language and enjoy it, they become intrinsically motivated to learn English, knowing that English would broaden their access to learning material (learning a language, with no short-term motives, could be dull, especially for young students). In this context CodeInternational can help students who are interacting with libraries in English. Perhaps more importantly our tool can help teachers rapidly develop localized content that builds off English content. The alternative – manual translation of APIs, code examples and website text – can be a huge barrier to translating material. Finally, our tool builds off GoogleTranslate, which is highly accurate, but charges $1 per 50,000 characters. A free version would have a huge impact on utility.
We call for future work from tool experts to extend popular code editors (e.g. vim, XCode, Visual Studio, Eclipse) to integrate with our APIs for back-and-forth translation and side-by-side display. Optionally, integrating with automatic text-to-speech (e.g. BIBREF34) could allow students to learn the English pronunciation of code components. Moreover, one remaining challenge in automatic human-language translation of code is identifier consistency: if two identifiers have specific terms in common, e.g. getHeight and setHeight, we would like the translation of "height" to be consistent. While translations are often consistent in our work, consistency is not enforced. Full consistency is hard, but not impossible, with modern neural machine translation.
Conclusion
We analyze millions of non-English Java programs on GitHub to inform our understanding of patterns of human-language use in code, and make some surprising observations. We build CodeInternational, an open-source tool which can translate Java or Python code between human languages. We evaluate our tool and use it to make an internationalized Karel eReader (with runnable code) in 100+ languages. Our tool is already being used in classrooms around the world, a trend we hope to continue supporting.
Conclusion ::: Acknowledgements
We would like to graciously thank XXX and YYY for contributing code to this translation project. We would also like to thank ZZZ for her inspiration. We also thank the WWW teachers for educating students around the world in their local language. | Yes |
42c02c554ab4ceaf30a8ca770be4f271887554c2 | 42c02c554ab4ceaf30a8ca770be4f271887554c2_0 | Q: What are results of public code repository study?
Text: Introduction
Reading and writing comments, method names, and variable names is a crucial part of software engineering; as such, programs have a human language (the language of identifiers and comments) in addition to the source-code language (e.g. Java or Python). This has meant that non-English speakers are often second-class citizens when learning to program BIBREF0. In this paper we present a tool for translating a program from one human language to another to assist in code education, which could reduce the barrier to computer science education for non-English speakers.
The main contributions presented in this paper are:
Analysis of 1.1M non-English code projects on GitHub
CodeInternational: A tool which can translate code between human languages, powered by Google Translate.
Validation of CodeInternational by evaluating the translation of 1,000 randomly chosen projects from GitHub.
Use of CodeInternational to automatically translate the popular Karel textbook into 100+ languages. We further extend the textbook to parse and run KarelJava code in any language; we report adoption by classrooms around the world.
Our human-language code translator was inspired by a desire to make programming more accessible BIBREF1. An accurate and useful translator would enable faster localization of instruction materials and it would allow learners (as well as practitioners) to translate code that they are working with.
As programming becomes more of a requisite common-knowledge skill, we expect coding education to become open-access to everyone. One barrier to this goal is human language. English is currently the modal language of programming instruction, perhaps because the keywords of most popular languages, Java, JavaScript, etc., are in English (even including Python and Lua, invented in the Netherlands and Brazil respectively). However, a majority of the world, estimated in 2008 at 80%, can't "use" English for communication, and substantially more don't speak English as their L1 language (the technical term for one's arterial language, aka mother tongue) BIBREF2. Should the more than 6 billion non-English speakers learn to program in their native language or in English? This question is debated; we address it in the discussion.
We take the position that, whether or not code instruction is in English, students who do not speak English as their L1 language would benefit from the ability to translate code between their preferred language and English.
Introduction ::: Related Work
To the best of our knowledge, automatic translation of code between human languages has not appeared in the literature, leading us to hypothesize that it is either difficult or has remained ignored. Nonetheless, we summarize related work that motivates our contribution.
Translation of Text: Automatic translation of natural language has recently achieved high accuracy and is used in highly sensitive contexts BIBREF3, BIBREF4, BIBREF5. At the time of writing this article, Google Translate uses Neural Machine Translation BIBREF6 to translate pairwise between languages and has become incredibly accurate, at least for languages common on the web BIBREF7. Further research has been done on transliterating text BIBREF8, BIBREF9. However, current state-of-the-art methods for text translation fail at translating code. Directly running a translation algorithm on code would fail to distinguish between code syntax and identifiers, would not recognize terms embedded in identifiers, e.g. with camel case getElementAt, and could produce code in which one identifier name has different translations on separate lines. As such, current automatic text translation, if run directly on code, would produce malfunctioning code. Code Instruction in Non-English: In 2017, Dasgupta and Hill published seminal work outlining the importance of learning to code in one's own language. They conclude that "novice users who code with their programming language keywords and environment localized into their home countries' primary language demonstrate new programming concepts at a faster rate than users from the same countries whose interface is in English" BIBREF10. Since then, there has been a large set of papers expanding on the barriers for non-native English speakers. Guo et al. survey over 800 non-English-speaking students learning programming, who report on the many challenges that come with not understanding English while coding BIBREF11, reinforced by BIBREF12, BIBREF13. This has led to preliminary work on translating compiler errors BIBREF14 and advocacy for language-free block programming BIBREF15. However, while language-free programming is a great step forward for younger students, it doesn't address the needs of CS1 students who program in common programming languages like Python or Java. While all of this work motivates our contribution, none has attempted an automatic solution to the problem, making crowd-translation a viable alternative BIBREF16.
Mining GitHub: To understand the patterns of code that students and practitioners use, we analyze public repositories on GitHub. Other researchers have also analyzed GitHub, sometimes via the dataset and tools provided by BIBREF17, including work on social diversity of teams BIBREF18 and affiliation influence on code popularity BIBREF19. This has led to a set of best practices for navigating the promises and perils of mining GitHub BIBREF20. A growing number of students are using GitHub in software engineering courses BIBREF21, which makes it a valuable resource for understanding the code of the general population, including students.
Code Conversion: There is a rich literature on translating code between programming languages, such as C or C++ to Java BIBREF22, BIBREF23, or even from English to code BIBREF24. However, the emphasis is often on maintaining efficiency, not on making code readable for students. We focus on translating the human language of code. Byckling et al. BIBREF25 analyze naming conventions of identifiers based on their function (fixed, iterators, transformers, etc.), and correlate naming consistency with the students' learning experience. This motivates aspects of our translation. See Section SECREF22.
Human Languages on GitHub
How do non-English speakers program in a language like Java, where the keywords and core libraries are written in English? We employ a data-driven approach to tell the story of non-English code and inform the decisions we made in our auto-translator. We analyzed Java repositories on GitHub, the largest host of source code in the world, where 1.1 million unique users host 2.9 million public Java projects. We downloaded and analyzed the human language used for writing comments (in Java code), naming identifiers (method and variable names), and writing git commit messages. We focused on Java code as it is both one of the most popular source-code languages on GitHub and in the classroom. A selection of results from this study:
Non-English code is a large-scale phenomenon.
Transliteration is common in identifiers for all languages.
Languages cluster into three distinct groups based on how speakers use identifiers/comments/transliteration.
Non-latin script users write comments in their L1 script but write identifiers in English.
Right-to-left (RTL) language scripts, such as Arabic, have no observed prevalence in GitHub identifiers, implying that existing coders who speak RTL languages face substantial barriers to using their native script in code.
This is, to the best of our knowledge, the first analysis of the human languages on GitHub. See Figure FIGREF6 for an overview.
Users on GitHub do not state their L1 (arterial) language. While a subset of users optionally state their country, this is neither common nor reliable. To estimate a user's preferred language we use the language that they use in their git commit messages. To find subsets of users who speak a given language, we search for all users who write git commits in that language. We observe that, especially in personal projects, users write commit messages in their L1 language at a higher rate than comments or identifiers. To identify languages we use Google Language Detect, which is highly accurate (more so for common internet languages) and can identify languages written in a non-Roman alphabet that have been transliterated; for example, it can detect both 算法, the Chinese characters for "algorithm", and "suanfa", the Mandarin transliteration, as Chinese.
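To make the commit-message pipeline concrete, the minimal sketch below shows how a user's preferred language can be estimated from their commit messages. It uses the open-source langdetect package as a stand-in for the Google language-detection service, and assumes the commit messages have already been fetched elsewhere.

# Sketch: estimate a user's preferred language from commit messages.
# langdetect is a stand-in for the Google language-detection service; the
# commit messages are assumed to have been collected already.
from collections import Counter
from langdetect import detect, LangDetectException

def estimate_user_language(commit_messages):
    votes = Counter()
    for message in commit_messages:
        text = message.strip()
        if not text:
            continue
        try:
            votes[detect(text)] += 1   # ISO 639-1 codes, e.g. 'zh-cn', 'es', 'en'
        except LangDetectException:
            continue                   # message has no usable language features
    return votes.most_common(1)[0][0] if votes else None

print(estimate_user_language(["修复空指针问题", "update docs", "添加单元测试"]))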
Of the 1.1 million GitHub users, 12.7% wrote commit messages in non-English languages. Of the non-English languages, Chinese was the most common (28.6% of non-English committers), followed by Spanish, Portuguese, French, and Japanese. More than 100 languages were detected in commit messages on public Java projects. Figure FIGREF6 contains breakdowns and the appendix contains the full list. This does not match the distribution of languages in web content (55% English), with both major and minor languages underrepresented. For example, the prevalence of Spanish on GitHub (2.1%) is about half of its prevalence in web content (5.1% BIBREF26) and further trails its share of native speakers (7.8% of the world's population BIBREF27).
GitHub does not present a random sample of programs written in the world, and we consider the relevant confounds this introduces. To that point, we believe the under-representation of certain languages is a form of survivorship bias. It suggests that users have found barriers to entry to joining the GitHub community. Those barriers could derive from the English dominance of programming languages, code instruction, or the GitHub interface.
Human Languages on GitHub ::: Non-English in Java
The use of non-English in identifiers and comments is large for the population of users who we define as non-English "speakers" (those who use non-English in their git-commit messages). 90% of users who use a non-English language in the commit messages also use that language in their comments or as identifiers. We note that, in Java, identifiers can be written in any script.
Surprisingly, the patterns of non-English usage differ substantially when we condition on users "speaking" different languages. For example, among the detected Spanish speakers, 87.2% of users write identifiers in Spanish. On the other hand, among Chinese users only 23.3% write code with Chinese identifiers (either in Chinese script or ASCII). Figure 1b shows coding patterns conditioned on users speaking different languages. For each language we plot the percent of projects with identifiers in the language against the percent of projects with comments in the language. Languages naturally cluster into three categories: (1) Major-Euro-Latin: languages with high use of non-English identifiers, including Spanish, German and French; (2) Non-Latin: languages in non-Latin scripts, including Russian and Chinese, which have low use of non-English identifiers; and (3) English-Comment: languages whose speakers write their comments in English (> 70% of projects only have English comments). This group contains many smaller and non-European languages like Dutch and Bahasa Indonesia. 0% of projects in this group use the L1 language in identifiers.
The use of identifiers in the local language (as opposed to English) is very clearly split on whether languages use the Latin alphabet. On average, 82% of projects from users who speak languages with non-Latin scripts like Chinese, Korean, or Russian have only English identifiers, compared to 12% of projects from Latin-alphabet users ($p < 0.0001$). The percentage of projects with only English comments is roughly correlated with the English Proficiency Index BIBREF28 of the corresponding countries ($\rho = 0.42$, $p < 0.01$).
Human Languages on GitHub ::: Transliteration on GitHub
Transliteration is the process of transferring a word from the alphabet of one language to another (e.g. rendering the Devanagari नमस्ते दुनिया as "namaste duniya"). We observed that most Java code written in human languages with non-ASCII scripts like Kanji or Devanagari, or even with Spanish accents like ñ, has been "transliterated" into ASCII.
The Java Language Specification states that "letters and digits (in identifiers) may be drawn from the entire Unicode character set, which supports most writing scripts". This specification is not widely known, and even though Java supports non-ASCII identifiers, there can be complexities with file encodings across different operating systems.
We find that regardless of L1 language, most users transliterate identifiers: among L1 Chinese speakers, 93% of projects have identifiers written only in ASCII. Similarly, in Spanish 88% of projects have only ASCII identifiers. As a concrete example, in GitHub Java code "numero" is 3.8x more common than "número". Among comments, languages differ greatly: 99% of Chinese projects have non-ASCII comments compared to only 53% of Spanish ones. As an example, a comment above a method may state in Chinese script that it calculates the Fibonacci sequence ("// 斐波那契"), while the method name (an identifier) uses a transliteration of the phonemes: "public int feibonaqie(int n)". This is a common pattern: within comments, 计数 (Chinese for count) is 4.0x more common than jishu, the transliteration; within identifiers, however, jishu is 4.8x more common. The difference in transliteration patterns between Chinese and Spanish suggests a different intent: in Spanish, transliteration is used to avoid file-encoding errors; in Chinese, it is to prevent a mix of scripts among identifiers.
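The script measurements above amount to a simple character-range check per token; a minimal, purely illustrative sketch follows (not the exact analysis code).

# Sketch: flag whether a project's identifiers or comments contain native-script
# (non-ASCII) characters. Purely illustrative, not the exact analysis code.
def is_ascii_only(token):
    return all(ord(ch) < 128 for ch in token)

def native_script_usage(identifiers, comments):
    return {
        "identifiers": any(not is_ascii_only(name) for name in identifiers),
        "comments": any(not is_ascii_only(text) for text in comments),
    }

print(native_script_usage(["feibonaqie", "jishu"], ["// 斐波那契数列的计数"]))
# -> {'identifiers': False, 'comments': True}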
Human Languages on GitHub ::: Right-to-Left Languages on GitHub
One question that we did not have a solid preconception about was: how do Java users who speak languages with right-to-left (RTL) scripts like Arabic, Urdu or Hebrew write code?
18,961 users on GitHub report their country as one where a RTL script (Arabic or Hebrew) is the primary script. Those users have 8,060 public Java repositories of which only 50 repositories (0.6%) have Arabic or Hebrew script (excluding string literals). Of those repositories, only a single Java file had a single identifier written in Arabic and none in Hebrew. It is extremely rare for methods or identifiers to be a mix of RTL and LTR.
Code International
The GitHub analysis is consistent with the contemporary narrative: there are perhaps hundreds of millions of learners who will not speak English as their L1 language. For those learners, teachers need a tool to translate code so they can give examples with less cognitive load. Similarly, students need a tool to understand the non-English code they encounter. Finally, to a growing extent, English speakers will begin to interact with code written in other languages.
To address this need, we designed a tool to help programmers, regardless of their spoken language, access code in many languages. The tool, which we call CodeInternational, takes in code written in either Java or Python, with comments and identifiers written in one human language, and translates the comments and identifiers into another human language. It supports the growing set of human languages covered by Google Translate and is adaptive to the particular context of source code. To translate code, it first parses the code and extracts four types of tokens:
Comments: inline or multi-line comments. Their purpose is for the programmer to communicate to programmers (including herself) on the purpose of code sections.
Immutable: consisting of language keywords (while, void, etc), and identifiers imported from libraries that are external to the code being translated (e.g. FileReader of java.io). By default this group is not translated.
Target identifiers: including variable and function names that are defined in the code base undergoing translation.
String literals: In some cases a user may want String literals to be translated, other times they should be unchanged.
Our translation algorithm is as follows. We (1) collect all of the target identifiers defined in the codebase and (2) translate them (enforcing that if two identifiers have the same name, they are given the same translation). Once the identifiers are translated, we (3) translate the comments, preserving structure and references to identifiers. (4) Finally, string literals are optionally translated. See Figure FIGREF20 for a high-level depiction and Figure FIGREF23 for a concrete example. Each of these steps has surprising challenges. In this section we cover the corresponding solutions we developed. The mapping of identifier translations that the tool decides on is preserved to assist any external source which needs to refer to the newly translated identifiers (such as text in a textbook or code in a related project).
CodeInternational is implemented in Python. Tokenization is performed using a modified version of "Javalang" (for Java) and the "Parser" library (for Python). Supporting other programming languages requires a small amount of extra work.
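A compressed sketch of this four-step pass is shown below. The Token type and the translate_phrase stub are illustrative stand-ins rather than CodeInternational's real API, and identifier segmentation/re-casing (see the identifier-segmentation sketch below) is omitted.

# Sketch of the four-step translation pass over a token stream. The Token type and
# translate_phrase stub are illustrative stand-ins, not the tool's real API.
from collections import namedtuple

Token = namedtuple("Token", ["kind", "text"])   # kind: comment / immutable / target_identifier / literal

def translate_phrase(text, target_lang):
    return f"<{target_lang}:{text}>"            # stand-in for a call to an online translator

def translate_program(tokens, target_lang, prior_map=None):
    identifier_map = dict(prior_map or {})      # translations supplied by an external source
    # (1)+(2): collect target identifiers and translate each distinct name once
    for tok in tokens:
        if tok.kind == "target_identifier" and tok.text not in identifier_map:
            identifier_map[tok.text] = translate_phrase(tok.text, target_lang)
    out = []
    for tok in tokens:
        if tok.kind == "target_identifier":     # consistent translation everywhere
            out.append(identifier_map[tok.text])
        elif tok.kind == "comment":             # (3): comments, preserving their structure
            out.append(translate_phrase(tok.text, target_lang))
        else:                                   # immutable keywords/APIs and, by default, literals
            out.append(tok.text)
    return "".join(out), identifier_map         # the posterior map is returned for reuse

tokens = [Token("comment", "// turn the robot around\n"),
          Token("immutable", "void "),
          Token("target_identifier", "turnAround"),
          Token("immutable", "() {}")]
translated_src, posterior_map = translate_program(tokens, "es")
print(translated_src)
print(posterior_map)   # can be fed back in as prior_map for related files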
Code International ::: Translating Identifiers
In order to properly translate identifiers, we consider the following:
Identifier segmentation: Translating an identifier using a tool like Google Translate does not work by default as identifiers are often composed of unsegmented words. For example: getFavoriteNumber is readable to a human as "get favorite number" but is not parsable by an online translator. We segment identifiers using naming conventions (e.g. camelCaseVariable, PascalCaseClass, UPPERCASE_CONSTANT). We thus segment identifiers into phrases which we feed into an automatic translator. We then recombine the translated phrase using the original casing convention. For example, to translate the method name identifier "turnAround" into Spanish: "turnAround" is segmented into "turn around" which is translated into "media vuelta" which is formatted into the original camelCase "mediaVuelta". Advances in artificial intelligence for word segmentation enable a future version of this tool to break up words without a given segmentation (eg "turnaround").
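A minimal sketch of this segment-translate-recombine step for a camelCase identifier is given below; the translate_words dictionary is an illustrative stub standing in for the call to Google Translate.

# Sketch: segment a camelCase identifier, translate the words, and re-assemble
# in the original convention. The translate_words stub stands in for Google Translate.
import re

def segment(identifier):
    # splits camelCase / PascalCase / UPPER_CASE into lowercase words
    parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", identifier)
    return [p.lower() for p in parts]

def recombine_camel(words):
    return words[0] + "".join(w.capitalize() for w in words[1:])

def translate_words(words, target_lang):
    demo = {"turn": "media", "around": "vuelta"}     # illustrative Spanish entries only
    return [demo.get(w, w) for w in words]

print(recombine_camel(translate_words(segment("turnAround"), "es")))  # -> mediaVuelta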
Verb prior: The correct translation for a phrase can be ambiguous, especially without context. As an example the method "move" translated into Spanish could be translated into a noun ("movimiento", movement) or a verb ("moverse"). For method identifiers there is an implicit context that an action is being performed. We incorporate this context by placing a prior on the first word being a verb. Thus, for example, when we translate "move()" into Spanish we chose "moverse()" instead of "movimiento()", the noun movement, as Google suggests.
In addition to knowing that the translations of methods should start with verbs, we also have a select number of reasonable tenses for the verb: infinitive (e.g. "toMove"), third-person present (e.g. "moves" as in "he moves") and imperative (e.g. "move"). In most languages, including English, we translate verbs with a prior that they be in the imperative tense. In English you would expect a method to be "getObject()", the imperative. However, some languages, especially Romance languages, use the infinitive of the verb: as an example, "obtener", the infinitive of "obtain", is 200x more common on GitHub than "obtenga", the imperative.
Translating short identifiers: Short variable names that are used for mathematical symbols or as iterators should not be translated. This is especially important for the canonical for-loop identifier "i". For example, translating the code "for(int i = 0; i < 10; i++)" into Spanish should not produce "for(int yo = 0; yo < 10; yo++)" even though "yo" is the translation of the pronoun "I". We only translate identifiers which are at least two characters long. This exception has its own edge case: CJK (Chinese, Japanese, Korean) identifiers can be non-mathematical names even if only one character long.
Code International ::: Translating comments
Once we have finished translating identifiers, we translate the comments in a program. Translating comments has two complexities: (1) we would like to maintain the comment structure, e.g. if it is a block Javadoc comment, we would like to preserve the column of '*'s on the left margin of the comment, and (2) we want references to identifiers to be translated exactly as they were in the code.
To translate a comment we classify its structure (e.g. JavaDoc, BlockComment, PythonDocString). We then strip the text out, translate it, and reformat it back into the same structure. For multi-line comments we are careful not to increase the maximum length of a line, taking into account the wider width of CJK characters.
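As a sketch of this strip-translate-reformat step for a Javadoc-style block comment (the translate argument stands in for the translation call, and CJK width handling is omitted):

# Sketch: strip the text out of a Javadoc-style block comment, translate it, and
# put the '*' margin back. Width handling for CJK characters is omitted here.
import textwrap

def translate_block_comment(comment, translate, width=60):
    lines = comment.strip().splitlines()
    body = " ".join(l.strip(" /*") for l in lines if l.strip(" /*"))
    translated = translate(body)
    wrapped = textwrap.wrap(translated, width=width) or [""]
    return "/**\n" + "\n".join(" * " + l for l in wrapped) + "\n */"

src = """/**
 * Moves the robot forward one square.
 */"""
print(translate_block_comment(src, lambda s: "Mueve el robot una casilla hacia adelante."))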
Code International ::: Translating Right-to-Left languages
Arabic, Hebrew, Farsi, and Urdu are popular right-to-left (RTL) natural languages. When translating code to RTL languages, comments can be translated (mixing RTL text within the left-to-right syntax) and optionally transliterated (keeping a left-to-right flow). Some of the difficulty in RTL transliteration is in distinguishing between short and long vowels. Further, these languages contain consonants that cannot be described using the Latin alphabet; these are generally represented with numbers in the transliteration culture, e.g. 7 for the Arabic letter ح, which is closest to the Latin "h", as in "Ahmad".
When translating into non-Latin scripts that are LTR, we give the user the option to transliterate identifiers and, separately, to transliterate comments or not. Transliteration is currently supported in Arabic, Chinese, Hebrew, Japanese, Korean, and Russian.
Code International ::: Prior and posterior translations
Translations of code need to be coherent with respect to other translations of written text (or other files) that refer to the code. To that end, our translator takes in, and uses, a preset identifier-translation map and returns the translations it made. This system enables having humans override translations, translating textbooks with text that references embedded code, and more.
Translating Github
How good is a translation of source code from one human language to another? Evaluating the quality of a translation is hard without a large collection of native speakers, and since we are powered by Google, evaluation can devolve into evaluating how accurate Google Translate is. Such an evaluation is a moving target: Google Translate is perennially improving.
To evaluate our translator we randomly selected 1,000 (1k) single-file projects from public GitHub Java code and translated them into Chinese, Spanish and Arabic. We measure (1) how often the translated code still compiles and (2) what percent of the identifiers that we attempt to translate are translatable.
Of the 1k projects, 100% maintained their ability to be compiled, regardless of whether we translated or transliterated the comments or identifiers. From the 1k projects, 91% of the identifiers were able to be translated. The remaining nine percent that could not be translated were mainly abbreviations (such as users who named a variable frac instead of fraction or pct instead of percent). This is an opportunity for future work. Overall, the results paint the picture of a functioning tool which is ready for use.
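The compile check in this evaluation is easy to reproduce; a sketch of the harness is below, where translate_file is a hypothetical stand-in for the translator and javac is assumed to be on the PATH.

# Sketch: verify that a translated Java file still compiles by invoking javac.
# `translate_file` is a hypothetical stand-in for the translator; javac must be installed.
import re, subprocess, tempfile, pathlib

def still_compiles(java_source, translate_file, target_lang="es"):
    translated = translate_file(java_source, target_lang)
    name = re.search(r"public\s+class\s+(\w+)", translated)
    filename = (name.group(1) if name else "Main") + ".java"
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / filename
        path.write_text(translated, encoding="utf-8")
        result = subprocess.run(["javac", "-encoding", "UTF-8", path.name],
                                capture_output=True, cwd=tmp)
        return result.returncode == 0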
International Karel
Our motivation for developing an automatic human-language code translation tool was to support education for non-English speakers. To that end we used CodeInternational to translate a web version of the popular Karel the Robot Learns Java reader by Eric Roberts BIBREF29, a textbook for Karel the Robot, a grid-world robot invented by Richard Pattis BIBREF30 to help CS1 students learn to program. Karel has been the inspiration for assignments on platforms such as Code.org and CodeHS and is a staple of the first weeks of CS1 BIBREF31.
We translated a Karel reader in Python and Java into 100 languages. The translated web reader is free to use and is hosted at [redacted]. At the time of publication the reader has been public (without advertising) and has already been used by over 3,000 people from 50 countries. With permission from Eric Roberts, we first made an eBook version of his Karel reader and simplified the English used BIBREF32. The reader merges text and code in a seamless fashion. Then, for each language, we (a) translated the code in each chapter using CodeInternational and (b) translated the reader text such that any reference to identifiers in the example code would use the same translations. In order to have text which is consistent with the corresponding code, we rely heavily on the "posterior identifier translation map" from CodeInternational's translations.
International Karel ::: Line-highlighting in any language
To make the Karel reader a fantastic learning experience we made it so that each code-snippet is runnable. When run, the program executes the code and highlights the corresponding lines as the program is run, regardless of the complexity of the program's control flow. In order to line-highlight we parse and compile the Python-Karel or Java-Karel programs using an engine written in JavaScript. Our line-highlighter builds upon the compiler described in Informatics Education using Nothing but a Browser BIBREF33.
Our Karel reader can run and line-highlight in any human language that we translate into. For example, our compiler can execute and line-highlight the command "moverse()" if the code is written in Spanish, "移动()" if the program is written in Chinese, "emshi()" if the program is written in Arabic, or "move()" if the Karel program is written in English. We chose to only transliterate commands for RTL scripts. Figure FIGREF27 shows three screenshots from the international Karel reader, though of course a PDF is unable to capture the ability of the reader to line-highlight code.
International Karel ::: Usage in Classrooms
We know of four classes where the internationalized Karel eReader has been used. These classes are around the world in: Istanbul, Bogota, Prague and [Redacted]. The eReader has been visited by >1k users in 3 months and both the English and the non-English version of the website have a high average session duration (9.7 min and 10.1 min respectively). Moreover, the tool has been used to translate the CSBridge curriculum website and assignments; HTML that mixes code and description (used by 400 students / year).
Discussion
Whether English should be used as the sole language of instruction has been debated. Case for code instruction in English: In order to program professionally, one will have to interact with keywords and libraries that are written for English speakers. English is the language of code, and it is practically required of anyone who wants to interact globally: correspond via email, read Stack Overflow, watch educational videos, travel, etc. For classrooms where English is the main form of instruction, but students are not yet fluent, CodeInternational can be used to assist learning English and learning to program. Students could improve their English through coding, e.g. by placing English code against their L1 code, side by side. Case for instruction on transl(iter)ated code: On the other hand, people argue that it is beneficial for students to have much of their coding instruction in their L1 language, and that doing so benefits access to CS. The primary reason for this is intuitive: the cognitive load of learning to program is already high. Moreover, if students learn coding using their L1 language and enjoy it, they become intrinsically motivated to learn English, knowing that English would broaden their access to learning material (learning a language, with no short-term motives, could be dull, especially for young students). In this context CodeInternational can help students who are interacting with libraries in English. Perhaps more importantly, our tool can help teachers rapidly develop localized content that builds off English content. The alternative, manual translation of APIs, code examples and website text, can be a huge barrier to translating material. Finally, our tool builds on Google Translate, which is highly accurate but charges $1 per 50,000 characters. A free version would have a huge impact on utility.
We call for future work from tool experts to extend popular code editors (e.g. vim, XCode, Visual Studio, Eclipse) to integrate with our APIs for back-and-forth translation and side-by-side display. Optionally, integrating with automatic text-to-speech (e.g. BIBREF34) could allow students to learn the English pronunciation of code components. Moreover, one remaining challenge in automatic human-language translation of code is identifier consistency: if two identifiers have specific terms in common, e.g. getHeight and setHeight, we would like the translation of "height" to be consistent. While translations are often consistent in our work, consistency is not enforced. Full consistency is hard, but not impossible, with modern neural machine translation.
Conclusion
We analyze millions of non-English Java programs on GitHub to inform our understanding of patterns of human-language use in code, and make some surprising observations. We build CodeInternational, an open-source tool which can translate Java or Python code between human languages. We evaluate our tool and use it to make an internationalized Karel eReader (with runnable code) in 100+ languages. Our tool is already being used in classrooms around the world, a trend we hope to continue supporting.
Conclusion ::: Acknowledgements
We would like to graciously thank XXX and YYY for contributing code to this translation project. We would also like to thank ZZZ for her inspiration. We also thank the WWW teachers for educating students around the world in their local language. | Non-English code is a large-scale phenomena., Transliteration is common in identifiers for all languages., Languages clusters into three distinct groups based on how speakers use identifiers/comments/transliteration., Non-latin script users write comments in their L1 script but write identifiers in English., Right-to-left (RTL) language scripts, such as Arabic, have no observed prevalence on GitHub identifiers |
5f0bb32d70ee8e4c4c59dc5c193bc0735fd751cc | 5f0bb32d70ee8e4c4c59dc5c193bc0735fd751cc_0 | Q: Where is the dataset from?
Text: Introduction
Virtual assistants help users accomplish tasks, including but not limited to finding flights and booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana, etc. need to support a large and constantly increasing number of services, over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven, deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3.
However, building large-scale assistants, as opposed to dialogue systems managing a few APIs, poses a new set of challenges. Apart from handling a very large variety of domains, such systems need to support heterogeneous services or APIs with possibly overlapping functionality. They should also offer an efficient way of supporting new APIs or services, while requiring little or no additional training data. Furthermore, to reduce maintenance workload and accommodate future growth, such assistants need to be robust to changes in an API's interface or the addition of new slot values. Such changes shouldn't require collection of additional training data or retraining the model.
The Schema-Guided Dialogue State Tracking task at the Eighth Dialogue System Technology Challenge explores the aforementioned challenges in the context of dialogue state tracking (DST). In a task-oriented dialogue, the dialogue state is a summary of the entire conversation up to the current turn. The dialogue state is used to invoke APIs with appropriate parameters as specified by the user over the dialogue history. It is also used by the assistant to generate the next actions to continue the dialogue. DST, therefore, is a core component of virtual assistants.
In this task, participants are required to develop innovative approaches to multi-domain dialogue state tracking, with a focus on data-efficient joint modeling across APIs and zero-shot generalization to new APIs. The task is based on the Schema-Guided Dialogue (SGD) dataset, which, to the best of our knowledge, is the largest publicly available corpus of annotated task-oriented dialogues. With over 16000 dialogues in the training set spanning 26 APIs over 16 domains, it exceeds the existing dialogue corpora in scale. SGD is the first dataset to allow multiple APIs with overlapping functionality within each domain. To adequately test generalization in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
Related Work
Dialogue systems have constituted an active area of research for the past few decades. The advent of commercial personal assistants has provided further impetus to dialogue systems research. As virtual assistants incorporate diverse domains, zero-shot modeling BIBREF4, BIBREF5, BIBREF6, domain adaptation and transfer learning techniques BIBREF7, BIBREF8, BIBREF9 have been explored to support new domains in a data efficient manner.
Deep learning based approaches to DST have recently gained popularity. Some of these approaches estimate the dialogue state as a distribution over all possible slot-values BIBREF10, BIBREF11 or individually score all slot-value combinations BIBREF12, BIBREF13. Such approaches are, however, hard to scale to real-world virtual assistants, where the set of possible values for certain slots may be very large (date, time or restaurant name) and even dynamic (movie or event name). Other approaches utilizing a dynamic vocabulary of slot values BIBREF14, BIBREF15 still do not allow zero-shot generalization to new services and APIs BIBREF16, since they use schema elements i.e. intents and slots as fixed class labels.
Although such systems are capable of parsing the dialogue semantics in terms of these fixed intent labels, they lack understanding of the semantics of these labels. For instance, for the user utterance “I want to buy tickets for a movie.", such models can predict BuyMovieTickets as the correct intent based on patterns observed in the training data, but don't model either its association with the real world action of buying movie tickets, or its similarity to the action of buying concert or theatre tickets. Furthermore, because of their dependence on a fixed schema, such models are not robust to changes in the schema, and need to be retrained as new slots or intents are added. Use of domain-specific parameters renders some approaches unsuitable for zero-shot application.
Task
The primary task of this challenge is to develop multi-domain models for DST suitable for the scale and complexity of large scale virtual assistants. Supporting a wide variety of APIs or services with possibly overlapping functionality is an important requirement of such assistants. A common approach to do this involves defining a large master schema that lists all intents and slots supported by the assistant. Each service either adopts this master schema for the representation of the underlying data, or provides logic to translate between its own schema and the master schema.
The first approach involving adoption of the master schema is not ideal if a service wishes to integrate with multiple assistants, since each of the assistants could have their own master schema. The second approach involves definition of logic for translation between master schema and the service's schema, which increases the maintenance workload. Furthermore, it is difficult to develop a master schema catering to all possible use cases.
Additionally, while there are many similar concepts across services that can be jointly modeled, for example, the similarities in logic for querying or specifying the number of movie tickets, flight tickets or concert tickets, the master schema approach does not facilitate joint modeling of such concepts, unless an explicit mapping between them is manually defined. To address these limitations, we propose a schema-guided approach, which eliminates the need for a master schema.
Task ::: Schema-Guided Approach
Under the Schema-Guided approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF2 shows an example). The dialogue annotations are guided by the schema of the underlying service or API, as shown in Figure FIGREF3. In this example, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.
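For concreteness, a schema of the kind shown in Figure FIGREF2 can be pictured roughly as follows (written here as a Python literal; the service, slots and values are illustrative rather than copied from the dataset):

# Illustrative schema for a flight-search service (not copied verbatim from the dataset).
flight_service_schema = {
    "service_name": "FlightSearchService",
    "description": "Search for flights between two cities",
    "slots": [
        {"name": "origin", "description": "City of departure", "is_categorical": False},
        {"name": "destination", "description": "City of arrival", "is_categorical": False},
        {"name": "number_stops", "description": "Number of stops in the itinerary",
         "is_categorical": True, "possible_values": ["0", "1"]},
    ],
    "intents": [
        {"name": "SearchFlights",
         "description": "Find flights between two cities on a given date",
         "required_slots": ["origin", "destination"],
         "optional_slots": {"number_stops": "0"}},
    ],
}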
The natural language descriptions present in the schema are used to obtain a semantic representation of intents and slots. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. Using a single model facilitates representation and transfer of common knowledge across related concepts in different services. Since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. It is also robust to changes like the addition of new intents or slots to the service. In addition, the participants are allowed to use any external datasets or resources to bootstrap their models.
Dataset
As shown in Table TABREF9, our Schema-Guided Dialogue (SGD) dataset exceeds other datasets in most of the metrics at scale. The especially larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.
Dataset ::: Data Representation
The dataset consists of conversations between a virtual assistant and a user. Each conversation can span multiple services across various domains. The dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slots values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.
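Concretely, the annotations for a single user turn can be pictured along the following lines (shown as a Python literal with made-up values, not an actual dialogue from the corpus):

# Illustrative annotation for one user turn (values are made up, not from the corpus).
user_turn = {
    "speaker": "USER",
    "utterance": "Find me a flight from Paris to London in business class.",
    "frames": [{
        "service": "FlightSearchService",
        "state": {
            "active_intent": "SearchFlights",
            "requested_slots": [],
            "slot_values": {"origin": ["Paris"], "destination": ["London"]},
        },
        "slots": [   # character spans for non-categorical slot values in this utterance
            {"slot": "origin", "start": 22, "exclusive_end": 27},
            {"slot": "destination", "start": 31, "exclusive_end": 37},
        ],
    }],
}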
In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema.
Dataset ::: Comparison With Other Datasets
To reflect the constraints present in real-world services and APIs, we impose a few constraints on the data. Our dataset does not expose the set of all possible values for certain slots. Having such a list is impractical for slots like date or time, because they have infinitely many possible values, or for slots like movie or song names, for which new values are periodically added. Such slots are specifically identified as non-categorical slots. In our evaluation sets, we ensured the presence of a significant number of values which were not previously seen in the training set to evaluate the performance of models on unseen values. Some slots like gender, number of people, etc. are classified as categorical and we provide a list of all possible values for them. However, these values are not assumed to be consistent across services. E.g., different services may use ('male', 'female'), ('M', 'F') or ('he', 'she') as possible values for the gender slot.
Real-world services can only be invoked with certain slot combinations: e.g. most restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. Although this constraint has no implications on the dialogue state tracking task, it restricts the possible conversational flows. Hence, to prevent flows not supported by actual services, we restrict services to be called with a list of slot combinations. The different service calls supported by a service are listed as intents with each intent specifying a list of required slots. The intent cannot be called without providing values for these required slots. Each intent also contains a list of optional slots with default values which can be overridden by the user.
In our dataset, we also have multiple services per domain with overlapping functionality. The intents across these services are similar but differ in terms of intent names, intent arguments, slot names, etc. In some cases, there is no one to one mapping between slot names (e.g., the num_stops and direct_only slots in Figure FIGREF3). With an ever increasing number of services and service providers, we believe that having multiple similar services per domain is much closer to the situation faced by virtual assistants than having one unique service per domain.
Dataset ::: Data Collection And Dataset Analysis
Our data collection setup uses a dialogue simulator to generate dialogue outlines first, which are then paraphrased to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much more diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Third, using a simulator greatly simplifies the data collection task and instructions, as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains.
The 20 domains present across the train, dev and test datasets are listed in Table TABREF10, as are the details regarding which domains are present in each of the datasets. We create synthetic implementations of a total of 45 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are structured representations of dialogue semantics. We then use a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps briefly and then present analyses of the collected dataset.
All the services are implemented using a SQL engine. Since entity attributes are often correlated, we decided not to sample synthetic entities and instead relied on sampling entities from Freebase. The dialogue simulator interacts with the services to generate valid dialogue outlines. The simulator consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. At the start of the conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. The user agent generates dialogue acts to be output and combines them with values retrieved from the service/API to create the user actions. The system agent responds by following a similar procedure but also ensures that the generated flows are valid. We identified over 200 distinct scenarios for the training set, each consisting of up to 5 intents from various domains. Finally, the dialogue outlines generated are paraphrased into a natural conversation by crowd workers. We ensure that the annotations for the dialogue state and slots generated by the simulator are preserved and hence need no other annotation. We omit details for brevity: please refer to BIBREF19 for more details.
The entire dataset consists of over 16K dialogues spanning multiple domains. Overall statistics of the dataset and comparison with other datasets can be seen in Table TABREF9. Figure FIGREF8 shows the details of the distribution of dialogue lengths across single-domain and multi-domain dialogues. The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. Figure FIGREF8 shows the frequency of the different dialogue acts contained in the dataset. The dataset also contains a significant number of unseen domains/APIs in the dev and test sets. 77% of the dialogue turns in the test set and 45% of the turns in dev set contain at least one service not present in the training set. This facilitates the development of models which can generalize to new domains with very few labelled examples.
Submissions
The submissions from 25 teams included a variety of approaches and innovative solutions to specific problems posed by this dataset. For the workshop, we received submissions from 9 of these teams. In this section, we provide a short summary of the approaches followed by these teams. For effective generalization to unseen APIs, most teams used pre-trained encoders to encode schema element descriptions. Unless otherwise mentioned, a pre-trained BERT BIBREF20 encoder was used.
Team 2 BIBREF21: This was the only paper not using a pre-trained encoder, thus providing another important baseline. They rely on separate RNNs to encode service, slot and intent descriptions, and a BiRNN to encode dialogue history. Slot values are inferred using a TRADE-like encoder-decoder setup with a 3-way slot status gate, using the utterance encoding and schema element embeddings as context.
Team 5 BIBREF22: They predict values for categorical slots using a softmax over all candidate values. Non-categorical slot values are predicted by first predicting the status of each slot and then using a BiLSTM-CRF layer for BIO tagging BIBREF23. They also utilize a slot adoption tracker to predict if the values proposed by the system are accepted by the user.
Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input. They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy.
Team 12 BIBREF25: They use auxiliary binary features to connect previous intent to current intent, slots to dialogue history and source slots to target slots for slot transfer. Non-categorical slots are modeled similar to question answering by adding a null token and predicting spans for slot values. In-domain and cross-domain slot transfers are modeled as separate binary decisions by passing the slot descriptions as additional inputs.
Team 16 BIBREF26: They convert the tracking task for both categorical and non-categorical slots into a question answering task by feeding in the schema and the previous turns as the context. Similar to the baseline model, prediction is performed in two stages. The status of each slot (active/inactive/dontcare) is predicted using a classifier, following which the value is predicted as a span in the context. The same network is used for the different prediction tasks but the leading token and separator tokens used are different. They observe large gains by fine-tuning the schema embeddings and increasing the number of past turns fed as context.
Team 23 BIBREF27: They use a large scale multi-task model utilizing a single pass of a BERT based model for all tasks. Embeddings are calculated for the intents and slot value by using dialogue history, service and slot descriptions, possible values for categorical slots and are used for the various predictions.
Anonymous Team A BIBREF28: We could not identify which team submitted this model. They use multi-head attention twice to obtain domain-conditioned and slot-conditioned representations of the dialogue history. These representations are concatenated to obtain the full context which is used for the various predictions.
Anonymous Team B BIBREF29: We could not identify which team submitted this model. They use separate NLU systems for the sub tasks of predicting intents, requested slots, slot status, categorical and non-categorical slot values. They use a rule-based DST system with a few additions resulting in significant improvement. The improvements include adding dropout to intent prediction to account for train-test mismatch, using the entire predicted slot status distribution and separate binary predictions for slot transfer.
Anonymous Team C BIBREF30: They use a two-stage model with a candidate tracker for NLU and a candidate classifier to update the dialogue state. A slot tagger identifies slot values, which are used to update the candidate tracker. The candidate classifier uses the utterances and slot/intent descriptions to predict the final dialogue state. They also use an additional loss to penalize incorrect prediction on which slots appear in the current turn.
Evaluation
We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.
Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.
Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.
Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.
Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly.
In order to better reflect model performance in our task's specific setting, we introduce changes in the definitions of evaluation metrics from prior work. These are listed below:
Joint goal accuracy calculation: Traditionally, joint goal accuracy has been defined as the accuracy of predicting the dialogue state for all domains correctly. This is not practical in our setup, as the large number of services would result in near zero joint goal accuracy if the traditional definition is used. Furthermore, an incorrect dialogue state prediction for a service in the beginning of a dialogue degrades the joint goal accuracy for all future turns, even if the predictions for all other services are correct. Hence, joint goal accuracy calculated this way may not provide as much insight into the performance on different services. To address these concerns, only the services which are active or pertinent in a turn are included in the dialogue state. Thus, a service ceases to be a part of the dialogue state once its intent has been fulfilled.
Fuzzy matching for non-categorical slot values: The presence of non-categorical slots is another distinguishing feature of our dataset. These slots don't have a predefined vocabulary, and their values are predicted as a substring or span of the past user or system utterances. Drawing inspiration from the metrics used for slot tagging in spoken language understanding, we use a fuzzy matching score for non-categorical slots to reward partial matches with the ground truth.
Average goal accuracy: To calculate average goal accuracy, we do not take into account instances when both the ground truth and the predicted values for a slot are empty. Since for a given slot, a large number of utterances have an empty assignment, models can achieve a relatively high average goal accuracy just by predicting an empty assignment for each slot unless specifically excluded as in our evaluation.
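A small sketch of these computations for one frame is given below; difflib's ratio is used as a stand-in for the official fuzzy-match scorer, and the aggregation is simplified relative to the released evaluation script.

# Sketch: per-frame goal accuracy with fuzzy matching for non-categorical slots.
# difflib is a stand-in for the official fuzzy-match scorer; aggregation is simplified.
from difflib import SequenceMatcher

def fuzzy(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def slot_score(slot, truth, pred, noncategorical):
    if slot in noncategorical:
        return max((fuzzy(t, pred) for t in truth), default=0.0)
    return float(pred in truth)

def goal_accuracy(true_state, pred_state, noncategorical):
    slots = set(true_state) | set(pred_state)
    scores = [slot_score(s, true_state.get(s, []), pred_state.get(s, ""), noncategorical)
              for s in slots]
    joint = float(all(sc == 1.0 for sc in scores)) if scores else 1.0
    average = sum(scores) / len(scores) if scores else 1.0
    return joint, average

truth = {"restaurant_name": ["P.F. Chang's"], "city": ["San Jose"]}
pred = {"restaurant_name": "PF Changs", "city": "San Jose"}
print(goal_accuracy(truth, pred, noncategorical={"restaurant_name", "city"}))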
Results
The test set contains a total of 21 services, among which 6 services are also present in the training set (seen services), whereas the remaining 15 are not present in the training set (unseen services). Table TABREF11 shows the evaluation metrics for the different submissions obtained on the test set. It also lists the performance of different submissions on seen and unseen services, helping evaluate the effectiveness in zero-shot settings. Team 9 achieved a very high joint goal accuracy of 86.53%, around 9% higher than the second-placed team. We observed the following trends across submissions:
For unseen services, performance on categorical slots is comparable to that on non-categorical slots. On the other hand, for seen services, the performance on categorical slots is better. This could be because there is less signal to differentiate between the different possible values for a categorical slot when they have not been observed in the training set.
The winning team's performance on seen services is similar to that of the other top teams. However, the winning team has a considerable edge on unseen services, outperforming the second team by around 12% in terms of joint goal accuracy. This margin was observed across both categorical and non-categorical slots.
Among unseen services, when looking at services belonging to unseen domains, the winning team was ahead of the other teams by at least 15%. The performance on categorical slots for unseen domains was about the same as that for seen services and domains. For other teams, there was at least a 20% drop in accuracy of categorical slots in unseen domains vs seen domains and services.
The joint goal accuracy of most of the models was worse by 15 percentage points on an average on the test set as compared to the dev set. This could be because the test set contains a much higher proportion of turns with at least one unseen services as compared to the dev set (77% and 45% respectively).
Summary
In this paper, we summarized the Schema-Guided Dialogue State Tracking task conducted at the Eighth Dialogue System Technology Challenge. This task challenged participants to develop dialogue state tracking models for large scale virtual assistants, with particular emphasis on joint modeling across different domains and APIs for data-efficiency and zero-shot generalization to new/unseen APIs. In order to encourage the development of such models, we constructed a new dataset spanning 16 domains (and 4 new domains in dev and test sets), defining multiple APIs with overlapping functionality for each of these domains. We advocated the use of schema-guided approach to building large-scale assistants, facilitating data-efficient joint modeling across domains while reducing maintenance workload.
The Schema-Guided Dialogue dataset released as part of this task is the first to highlight many of the aforementioned challenges. As a result, this task led to the development of several models utilizing the schema-guided approach for dialogue state tracking. The models extensively utilized pre-trained encoders such as BERT BIBREF20 and XLNet BIBREF31, and employed data augmentation techniques to achieve effective zero-shot generalization to new APIs. The proposed schema-guided approach is fairly general and can be used to develop other dialogue system components such as language understanding, policy and response generation. We plan to explore these in future work.
Summary ::: Acknowledgements
The authors thank Guan-Lin Chao, Amir Fayazi and Maria Wang for their advice and assistance. | dialogue simulator |
a88a454ac1a1230263166fd824e5daebb91cb05a | a88a454ac1a1230263166fd824e5daebb91cb05a_0 | Q: What data augmentation techniques are used? | back translation between English and Chinese
bbaf7cbae88c085faa6bbe3319e4943362fe1ad4 | bbaf7cbae88c085faa6bbe3319e4943362fe1ad4_0 | Q: Do all teams use neural networks for their models? | Unanswerable
a6b99b7f32fb79a7db996fef76e9d83def05c64b | a6b99b7f32fb79a7db996fef76e9d83def05c64b_0 | Q: How are the models evaluated?
Text: Introduction
Virtual assistants help users accomplish tasks such as finding flights and booking restaurants by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri and Microsoft Cortana need to support a large and constantly increasing number of services over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven, deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3.
However, building large-scale assistants, as opposed to dialogue systems managing a few APIs, poses a new set of challenges. Apart from handling a very large variety of domains, such systems need to support heterogeneous services or APIs with possibly overlapping functionality. They should also offer an efficient way of supporting new APIs or services, while requiring little or no additional training data. Furthermore, to reduce maintenance workload and accommodate future growth, such assistants need to be robust to changes in an API's interface or the addition of new slot values. Such changes shouldn't require collection of additional training data or retraining the model.
The Schema-Guided Dialogue State Tracking task at the Eighth Dialogue System Technology Challenge explores the aforementioned challenges in the context of dialogue state tracking (DST). In a task-oriented dialogue, the dialogue state is a summary of the entire conversation up to the current turn. The dialogue state is used to invoke APIs with the appropriate parameters as specified by the user over the dialogue history. It is also used by the assistant to generate the next actions to continue the dialogue. DST, therefore, is a core component of virtual assistants.
In this task, participants are required to develop innovative approaches to multi-domain dialogue state tracking, with a focus on data-efficient joint modeling across APIs and zero-shot generalization to new APIs. The task is based on the Schema-Guided Dialogue (SGD) dataset, which, to the best of our knowledge, is the largest publicly available corpus of annotated task-oriented dialogues. With over 16000 dialogues in the training set spanning 26 APIs over 16 domains, it exceeds the existing dialogue corpora in scale. SGD is the first dataset to allow multiple APIs with overlapping functionality within each domain. To adequately test generalization in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
Related Work
Dialogue systems have constituted an active area of research for the past few decades. The advent of commercial personal assistants has provided further impetus to dialogue systems research. As virtual assistants incorporate diverse domains, zero-shot modeling BIBREF4, BIBREF5, BIBREF6, domain adaptation and transfer learning techniques BIBREF7, BIBREF8, BIBREF9 have been explored to support new domains in a data efficient manner.
Deep learning based approaches to DST have recently gained popularity. Some of these approaches estimate the dialogue state as a distribution over all possible slot-values BIBREF10, BIBREF11 or individually score all slot-value combinations BIBREF12, BIBREF13. Such approaches are, however, hard to scale to real-world virtual assistants, where the set of possible values for certain slots may be very large (date, time or restaurant name) and even dynamic (movie or event name). Other approaches utilizing a dynamic vocabulary of slot values BIBREF14, BIBREF15 still do not allow zero-shot generalization to new services and APIs BIBREF16, since they use schema elements i.e. intents and slots as fixed class labels.
Although such systems are capable of parsing the dialogue semantics in terms of these fixed intent labels, they lack understanding of the semantics of these labels. For instance, for the user utterance "I want to buy tickets for a movie.", such models can predict BuyMovieTickets as the correct intent based on patterns observed in the training data, but don't model either its association with the real world action of buying movie tickets, or its similarity to the action of buying concert or theatre tickets. Furthermore, because of their dependence on a fixed schema, such models are not robust to changes in the schema, and need to be retrained as new slots or intents are added. Use of domain-specific parameters renders some approaches unsuitable for zero-shot application.
Task
The primary task of this challenge is to develop multi-domain models for DST suitable for the scale and complexity of large scale virtual assistants. Supporting a wide variety of APIs or services with possibly overlapping functionality is an important requirement of such assistants. A common approach to do this involves defining a large master schema that lists all intents and slots supported by the assistant. Each service either adopts this master schema for the representation of the underlying data, or provides logic to translate between its own schema and the master schema.
The first approach involving adoption of the master schema is not ideal if a service wishes to integrate with multiple assistants, since each of the assistants could have their own master schema. The second approach involves definition of logic for translation between master schema and the service's schema, which increases the maintenance workload. Furthermore, it is difficult to develop a master schema catering to all possible use cases.
Additionally, many similar concepts across services could be modeled jointly, for example, the logic for querying or specifying the number of movie, flight or concert tickets. The master schema approach does not facilitate joint modeling of such concepts unless an explicit mapping between them is manually defined. To address these limitations, we propose a schema-guided approach, which eliminates the need for a master schema.
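To make the maintenance burden concrete, the sketch below shows the kind of hand-written translation logic each service would need under a master-schema approach; the service and slot names are hypothetical.

```python
# Hypothetical per-service mapping required under a master-schema approach.
# Every new service needs a hand-written table like this, and it must be kept
# in sync with both the service's schema and the assistant's master schema.
MASTER_SLOT_MAP = {
    "FlightsA": {"origin_city": "departure_city", "dest_city": "arrival_city"},
    "FlightsB": {"from": "departure_city", "to": "arrival_city"},
}

def to_master_schema(service_name: str, slot_values: dict) -> dict:
    """Translate a service's slot names into the assistant's master schema."""
    mapping = MASTER_SLOT_MAP[service_name]
    return {mapping.get(slot, slot): value for slot, value in slot_values.items()}

print(to_master_schema("FlightsB", {"from": "Oakland", "to": "New York"}))
# {'departure_city': 'Oakland', 'arrival_city': 'New York'}
```

Every newly integrated service adds another entry to such a table, which is exactly the workload the schema-guided approach is designed to avoid.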
Task ::: Schema-Guided Approach
Under the Schema-Guided approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF2 shows an example). The dialogue annotations are guided by the schema of the underlying service or API, as shown in Figure FIGREF3. In this example, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.
The natural language descriptions present in the schema are used to obtain a semantic representation of intents and slots. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. Using a single model facilitates representation and transfer of common knowledge across related concepts in different services. Since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. It is also robust to changes like the addition of new intents or slots to the service. In addition, the participants are allowed to use any external datasets or resources to bootstrap their models.
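As a concrete illustration, a minimal schema for a hypothetical flight service is sketched below as a Python dictionary; the field names approximate the released SGD schema format but should be treated as an assumption rather than the normative specification.

```python
# Illustrative schema for a hypothetical flight service in the schema-guided format.
# Field names approximate the released SGD schemas; remaining slots are omitted.
FLIGHTS_SCHEMA = {
    "service_name": "Flights_1",
    "description": "Service to search for one-way and round-trip flights.",
    "slots": [
        {
            "name": "origin",
            "description": "City from which the flight departs",
            "is_categorical": False,
            "possible_values": [],          # non-categorical: open vocabulary
        },
        {
            "name": "seating_class",
            "description": "Seating class of the flight",
            "is_categorical": True,
            "possible_values": ["Economy", "Premium Economy", "Business"],
        },
    ],
    "intents": [
        {
            "name": "SearchOnewayFlight",
            "description": "Search for one-way flights to a destination",
            "required_slots": ["origin", "destination", "departure_date"],
            "optional_slots": {"seating_class": "Economy"},   # default value
        }
    ],
}
```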
Dataset
As shown in Table TABREF9, our Schema-Guided Dialogue (SGD) dataset exceeds other datasets on most measures of scale. In particular, the larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, which are not present in the training set, to help evaluate model performance on unseen services.
Dataset ::: Data Representation
The dataset consists of conversations between a virtual assistant and a user. Each conversation can span multiple services across various domains. The dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slot values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.
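A sketch of how one user turn and its frame-level annotations might look is given below; the field names and character offsets are illustrative and may not match the released data files exactly.

```python
# Sketch of the annotations for a single user turn; field names are illustrative.
user_turn = {
    "speaker": "USER",
    "utterance": "Find me a one-way flight from Oakland leaving next Monday.",
    "frames": [
        {
            "service": "Flights_1",
            "state": {                       # dialogue state for this service
                "active_intent": "SearchOnewayFlight",
                "requested_slots": [],
                "slot_values": {
                    "origin": ["Oakland"],
                    "departure_date": ["next Monday"],
                },
            },
            # character offsets for non-categorical slot values in the utterance
            "slots": [{"slot": "origin", "start": 30, "exclusive_end": 37}],
        }
    ],
}
```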
In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema.
Dataset ::: Comparison With Other Datasets
To reflect the constraints present in real-world services and APIs, we impose a few constraints on the data. Our dataset does not expose the set of all possible values for certain slots. Having such a list is impractical for slots like date or time, which have infinitely many possible values, or for slots like movie or song names, for which new values are periodically added. Such slots are specifically identified as non-categorical slots. In our evaluation sets, we ensured the presence of a significant number of values which were not previously seen in the training set to evaluate the performance of models on unseen values. Some slots like gender, number of people, etc. are classified as categorical and we provide a list of all possible values for them. However, these values are not assumed to be consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for the gender slot.
Real-world services can only be invoked with certain slot combinations: e.g. most restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. Although this constraint has no implications on the dialogue state tracking task, it restricts the possible conversational flows. Hence, to prevent flows not supported by actual services, we restrict services to be called with a list of slot combinations. The different service calls supported by a service are listed as intents with each intent specifying a list of required slots. The intent cannot be called without providing values for these required slots. Each intent also contains a list of optional slots with default values which can be overridden by the user.
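The sketch below shows the kind of check this constraint implies before a service call can be issued; the intent definition used here is a hypothetical example.

```python
# Sketch of checking required slots and merging optional-slot defaults before
# an API call; the intent definition below is a hypothetical example.
intent = {
    "name": "SearchOnewayFlight",
    "required_slots": ["origin", "destination", "departure_date"],
    "optional_slots": {"seating_class": "Economy"},
}

def can_call_intent(intent: dict, filled: dict) -> bool:
    """True only when every required slot has a user-provided value."""
    return all(slot in filled for slot in intent["required_slots"])

def build_call_args(intent: dict, filled: dict) -> dict:
    """Optional-slot defaults first, overridden by user-provided values."""
    args = dict(intent["optional_slots"])
    args.update(filled)
    return args

state = {"origin": "Oakland", "destination": "New York", "departure_date": "2019-03-04"}
if can_call_intent(intent, state):
    print(build_call_args(intent, state))
```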
In our dataset, we also have multiple services per domain with overlapping functionality. The intents across these services are similar but differ in terms of intent names, intent arguments, slot names, etc. In some cases, there is no one to one mapping between slot names (e.g., the num_stops and direct_only slots in Figure FIGREF3). With an ever increasing number of services and service providers, we believe that having multiple similar services per domain is much closer to the situation faced by virtual assistants than having one unique service per domain.
Dataset ::: Data Collection And Dataset Analysis
Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much more diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Third, using a simulator greatly simplifies the data collection task and instructions, as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains.
The 20 domains present across the train, dev and test datasets are listed in Table TABREF10, as are the details regarding which domains are present in each of the datasets. We create synthetic implementations of a total of 45 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are structured representations of dialogue semantics. We then use a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps briefly and then present analyses of the collected dataset.
All the services are implemented using a SQL engine. Since entity attributes are often correlated, we decided not to sample synthetic entities and instead relied on sampling entities from Freebase. The dialogue simulator interacts with the services to generate valid dialogue outlines. The simulator consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. At the start of the conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. The user agent generates dialogue acts to be output and combines them with values retrieved from the service/API to create the user actions. The system agent responds by following a similar procedure but also ensures that the generated flows are valid. We identified over 200 distinct scenarios for the training set, each consisting of up to 5 intents from various domains. Finally, the dialogue outlines generated are paraphrased into a natural conversation by crowd workers. We ensure that the annotations for the dialogue state and slots generated by the simulator are preserved and hence need no other annotation. We omit further details for brevity; please refer to BIBREF19.
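A highly simplified sketch of this outline-generation loop is shown below; the dialogue acts, scenario format and termination logic are toy assumptions, and the actual simulator's probabilistic automaton is considerably richer.

```python
import random

class ToyService:
    """Stand-in for a SQL-backed service; returns canned slot values per intent."""
    def sample_slot_values(self, intent):
        canned = {"ReserveRestaurant": {"city": "Palo Alto", "time": "7 pm"}}
        return canned.get(intent, {})

def generate_outline(scenario, service, max_turns=20):
    """Toy two-agent loop: the user agent emits dialogue acts with slot values,
    the system agent responds, and intents are popped once 'fulfilled'."""
    outline, pending = [], list(scenario)          # scenario = sequence of intents
    while pending and len(outline) < max_turns:
        intent = pending[0]
        user_act = random.choice(["INFORM", "REQUEST"])
        outline.append(("USER", user_act, service.sample_slot_values(intent)))
        system_act = "CONFIRM" if user_act == "INFORM" else "OFFER"
        outline.append(("SYSTEM", system_act, {}))
        if system_act == "CONFIRM":
            pending.pop(0)                         # intent fulfilled, move on
    return outline

print(generate_outline(["ReserveRestaurant"], ToyService()))
```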
The entire dataset consists of over 16K dialogues spanning multiple domains. Overall statistics of the dataset and comparison with other datasets can be seen in Table TABREF9. Figure FIGREF8 shows the details of the distribution of dialogue lengths across single-domain and multi-domain dialogues. The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. Figure FIGREF8 shows the frequency of the different dialogue acts contained in the dataset. The dataset also contains a significant number of unseen domains/APIs in the dev and test sets. 77% of the dialogue turns in the test set and 45% of the turns in the dev set contain at least one service not present in the training set. This facilitates the development of models which can generalize to new domains with very few labelled examples.
Submissions
The submissions from 25 teams included a variety of approaches and innovative solutions to specific problems posed by this dataset. For the workshop, we received submissions from 9 of these teams. In this section, we provide a short summary of the approaches followed by these teams. For effective generalization to unseen APIs, most teams used pre-trained encoders to encode schema element descriptions. Unless otherwise mentioned, a pre-trained BERT BIBREF20 encoder was used.
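A minimal sketch of how schema element descriptions can be embedded with a pre-trained encoder is shown below, using the Hugging Face transformers library; taking the [CLS] vector as the schema-element embedding is one common choice and not necessarily what any particular team did.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Embed natural-language slot/intent descriptions with a pre-trained BERT encoder;
# here the [CLS] vector is used as the embedding of each schema element.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

descriptions = [
    "City from which the flight departs",           # a slot description
    "Search for one-way flights to a destination",  # an intent description
]
inputs = tokenizer(descriptions, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)
schema_embeddings = outputs.last_hidden_state[:, 0]  # shape: (num_elements, hidden_size)
```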
Team 2 BIBREF21: This was the only paper not using a pre-trained encoder, thus providing another important baseline. They rely on separate RNNs to encode service, slot and intent descriptions, and a BiRNN to encode dialogue history. Slot values are inferred using a TRADE-like encoder-decoder setup with a 3-way slot status gate, using the utterance encoding and schema element embeddings as context.
Team 5 BIBREF22: They predict values for categorical slots using a softmax over all candidate values. Non-categorical slot values are predicted by first predicting the status of each slot and then using a BiLSTM-CRF layer for BIO tagging BIBREF23. They also utilize a slot adoption tracker to predict if the values proposed by the system are accepted by the user.
Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input. They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy.
Team 12 BIBREF25: They use auxiliary binary features to connect previous intent to current intent, slots to dialogue history and source slots to target slots for slot transfer. Non-categorical slots are modeled similar to question answering by adding a null token and predicting spans for slot values. In-domain and cross-domain slot transfers are modeled as separate binary decisions by passing the slot descriptions as additional inputs.
Team 16 BIBREF26: They convert the tracking task for both categorical and non-categorical slots into a question answering task by feeding in the schema and the previous turns as the context. Similar to the baseline model, prediction is performed in two stages. The status of each slot (active/inactive/dontcare) is predicted using a classifier, following which the value is predicted as a span in the context. The same network is used for the different prediction tasks but the leading token and separator tokens used are different. They observe large gains by fine-tuning the schema embeddings and increasing the number of past turns fed as context.
Team 23 BIBREF27: They use a large scale multi-task model utilizing a single pass of a BERT based model for all tasks. Embeddings are calculated for the intents and slot value by using dialogue history, service and slot descriptions, possible values for categorical slots and are used for the various predictions.
Anonymous Team A BIBREF28: We could not identify which team submitted this model. They use multi-head attention twice to obtain domain-conditioned and slot-conditioned representations of the dialogue history. These representations are concatenated to obtain the full context which is used for the various predictions.
Anonymous Team B BIBREF29: We could not identify which team submitted this model. They use separate NLU systems for the sub tasks of predicting intents, requested slots, slot status, categorical and non-categorical slot values. They use a rule-based DST system with a few additions resulting in significant improvement. The improvements include adding dropout to intent prediction to account for train-test mismatch, using the entire predicted slot status distribution and separate binary predictions for slot transfer.
Anonymous Team C BIBREF30: They use a two-stage model with a candidate tracker for NLU and a candidate classifier to update the dialogue state. A slot tagger identifies slot values, which are used to update the candidate tracker. The candidate classifier uses the utterances and slot/intent descriptions to predict the final dialogue state. They also use an additional loss to penalize incorrect prediction on which slots appear in the current turn.
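Several of the entries above, as well as the baseline referenced by Team 16, follow a two-stage pattern of slot status classification followed by span selection for non-categorical values. The module below is a deliberately minimal sketch of that pattern, not a reconstruction of any particular team's model.

```python
import torch
from torch import nn

class TwoStageSlotHead(nn.Module):
    """Minimal sketch: a 3-way slot status classifier plus span start/end scores
    over the token encodings of the dialogue context."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.status = nn.Linear(hidden_size, 3)   # active / inactive / dontcare
        self.span = nn.Linear(hidden_size, 2)     # start and end logits per token

    def forward(self, cls_encoding, token_encodings):
        status_logits = self.status(cls_encoding)                         # (batch, 3)
        start_logits, end_logits = self.span(token_encodings).unbind(-1)  # (batch, seq_len)
        return status_logits, start_logits, end_logits

head = TwoStageSlotHead(hidden_size=768)
status, start, end = head(torch.zeros(1, 768), torch.zeros(1, 64, 768))
```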
Evaluation
We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.
Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.
Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.
Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.
Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly.
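A compact sketch of the first two metrics is given below; whether the official scorer averages per turn or per slot should be checked against the released evaluation code.

```python
def active_intent_accuracy(gold_intents, pred_intents):
    """Fraction of user turns whose active intent is predicted correctly."""
    correct = sum(g == p for g, p in zip(gold_intents, pred_intents))
    return correct / len(gold_intents)

def requested_slot_f1(gold_sets, pred_sets):
    """Macro-average of per-turn F1 over requested slots; turns that are empty
    in both the ground truth and the prediction are skipped."""
    scores = []
    for gold, pred in zip(gold_sets, pred_sets):
        if not gold and not pred:
            continue
        tp = len(gold & pred)
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores) if scores else 0.0

print(requested_slot_f1([{"price", "phone"}], [{"price"}]))  # ~0.667
```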
In order to better reflect model performance in our task's specific setting, we introduce changes in the definitions of evaluation metrics from prior work. These are listed below:
Joint goal accuracy calculation: Traditionally, joint goal accuracy has been defined as the accuracy of predicting the dialogue state for all domains correctly. This is not practical in our setup, as the large number of services would result in near zero joint goal accuracy if the traditional definition is used. Furthermore, an incorrect dialogue state prediction for a service in the beginning of a dialogue degrades the joint goal accuracy for all future turns, even if the predictions for all other services are correct. Hence, joint goal accuracy calculated this way may not provide as much insight into the performance on different services. To address these concerns, only the services which are active or pertinent in a turn are included in the dialogue state. Thus, a service ceases to be a part of the dialogue state once its intent has been fulfilled.
Fuzzy matching for non-categorical slot values: The presence of non-categorical slots is another distinguishing feature of our dataset. These slots don't have a predefined vocabulary, and their values are predicted as a substring or span of the past user or system utterances. Drawing inspiration from the metrics used for slot tagging in spoken language understanding, we use a fuzzy matching score for non-categorical slots to reward partial matches with the ground truth.
Average goal accuracy: To calculate average goal accuracy, we do not take into account instances when both the ground truth and the predicted values for a slot are empty. Since for a given slot, a large number of utterances have an empty assignment, models can achieve a relatively high average goal accuracy just by predicting an empty assignment for each slot unless specifically excluded as in our evaluation.
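The sketch below illustrates these modified definitions: a difflib-based stand-in for the fuzzy matching score and a joint goal accuracy computed only over active services. The official scorer's similarity function and credit assignment may differ, so treat this as a simplified approximation.

```python
from difflib import SequenceMatcher

def fuzzy_score(gold: str, pred: str) -> float:
    """Partial-credit match for non-categorical slot values; the official
    scorer's similarity function may differ from this difflib stand-in."""
    return SequenceMatcher(None, gold.lower(), pred.lower()).ratio()

def joint_goal_accuracy(turn_states) -> float:
    """Simplified sketch: a (turn, active service) pair is correct only if every
    slot of that service's state is predicted exactly. Services whose intents
    have been fulfilled are assumed to be dropped from the state upstream."""
    correct = total = 0
    for gold_state, pred_state in turn_states:     # dicts: service -> {slot: value}
        for service, gold_slots in gold_state.items():
            total += 1
            correct += int(pred_state.get(service, {}) == gold_slots)
    return correct / total if total else 0.0

print(fuzzy_score("Avengers: Endgame", "avengers endgame"))  # partial credit < 1.0
```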
Results
The test set contains a total of 21 services, among which 6 services are also present in the training set (seen services), whereas the remaining 15 are not present in the training set (unseen services). Table TABREF11 shows the evaluation metrics for the different submissions obtained on the test set. It also lists the performance of different submissions on seen and unseen services, helping evaluate the effectiveness in zero-shot settings. Team 9 achieved a very high joint goal accuracy of 86.53%, around 9% higher than the second-placed team. We observed the following trends across submissions:
For unseen services, performance on categorical slots is comparable to that on non-categorical slots. On the other hand, for seen services, the performance on categorical slots is better. This could be because there is less signal to differentiate between the different possible values for a categorical slot when they have not been observed in the training set.
The winning team's performance on seen services is similar to that of the other top teams. However, the winning team has a considerable edge on unseen services, outperforming the second team by around 12% in terms of joint goal accuracy. This margin was observed across both categorical and non-categorical slots.
Among unseen services, when looking at services belonging to unseen domains, the winning team was ahead of the other teams by at least 15%. The performance on categorical slots for unseen domains was about the same as that for seen services and domains. For other teams, there was at least a 20% drop in accuracy of categorical slots in unseen domains vs seen domains and services.
The joint goal accuracy of most of the models was worse by 15 percentage points on average on the test set as compared to the dev set. This could be because the test set contains a much higher proportion of turns with at least one unseen service as compared to the dev set (77% and 45% respectively).
Summary
In this paper, we summarized the Schema-Guided Dialogue State Tracking task conducted at the Eighth Dialogue System Technology Challenge. This task challenged participants to develop dialogue state tracking models for large-scale virtual assistants, with particular emphasis on joint modeling across different domains and APIs for data-efficiency and zero-shot generalization to new/unseen APIs. In order to encourage the development of such models, we constructed a new dataset spanning 16 domains (and 4 new domains in dev and test sets), defining multiple APIs with overlapping functionality for each of these domains. We advocated the use of the schema-guided approach to building large-scale assistants, facilitating data-efficient joint modeling across domains while reducing maintenance workload.
The Schema-Guided Dialogue dataset released as part of this task is the first to highlight many of the aforementioned challenges. As a result, this task led to the development of several models utilizing the schema-guided approach for dialogue state tracking. The models extensively utilized pre-trained encoders such as BERT BIBREF20 and XLNet BIBREF31, and employed data augmentation techniques to achieve effective zero-shot generalization to new APIs. The proposed schema-guided approach is fairly general and can be used to develop other dialogue system components such as language understanding, policy and response generation. We plan to explore these directions in future work.
Summary ::: Acknowledgements
The authors thank Guan-Lin Chao, Amir Fayazi and Maria Wang for their advice and assistance. | Active Intent Accuracy, Requested Slot F1, Average Goal Accuracy, Joint Goal Accuracy |
d47c074012eae27426cd700f841fd8bf490dcc7b | d47c074012eae27426cd700f841fd8bf490dcc7b_0 | Q: What is the baseline model?
Unanswerable
b43fa27270eeba3e80ff2a03754628b5459875d6 | b43fa27270eeba3e80ff2a03754628b5459875d6_0 | Q: What domains are present in the data?
Text: Introduction
Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana etc. need to support a large and constantly increasing number of services, over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3.
However, building large-scale assistants, as opposed to dialogue systems managing a few APIs, poses a new set of challenges. Apart from the handling a very large variety of domains, such systems need to support heterogeneous services or APIs with possibly overlapping functionality. It should also offer an efficient way of supporting new APIs or services, while requiring little or no additional training data. Furthermore, to reduce maintenance workload and accommodate future growth, such assistants need to be robust to changes in the API's interface or addition of new slot values. Such changes shouldn't require collection of additional training data or retraining the model.
The Schema-Guided Dialogue State Tracking task at the Eighth Dialogue System Technology Challenge explores the aforementioned challenges in context of dialogue state tracking. In a task-oriented dialogue, the dialogue state is a summary of the entire conversation till the current turn. The dialogue state is used to invoke APIs with appropriate parameters as specified by the user over the dialogue history. It is also used by the assistant to generate the next actions to continue the dialogue. DST, therefore, is a core component of virtual assistants.
In this task, participants are required to develop innovative approaches to multi-domain dialogue state tracking, with a focus on data-efficient joint modeling across APIs and zero-shot generalization to new APIs. The task is based on the Schema-Guided Dialogue (SGD) dataset, which, to the best of our knowledge, is the largest publicly available corpus of annotated task-oriented dialogues. With over 16000 dialogues in the training set spanning 26 APIs over 16 domains, it exceeds the existing dialogue corpora in scale. SGD is the first dataset to allow multiple APIs with overlapping functionality within each domain. To adequately test generalization in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
Related Work
Dialogue systems have constituted an active area of research for the past few decades. The advent of commercial personal assistants has provided further impetus to dialogue systems research. As virtual assistants incorporate diverse domains, zero-shot modeling BIBREF4, BIBREF5, BIBREF6, domain adaptation and transfer learning techniques BIBREF7, BIBREF8, BIBREF9 have been explored to support new domains in a data efficient manner.
Deep learning based approaches to DST have recently gained popularity. Some of these approaches estimate the dialogue state as a distribution over all possible slot-values BIBREF10, BIBREF11 or individually score all slot-value combinations BIBREF12, BIBREF13. Such approaches are, however, hard to scale to real-world virtual assistants, where the set of possible values for certain slots may be very large (date, time or restaurant name) and even dynamic (movie or event name). Other approaches utilizing a dynamic vocabulary of slot values BIBREF14, BIBREF15 still do not allow zero-shot generalization to new services and APIs BIBREF16, since they use schema elements i.e. intents and slots as fixed class labels.
Although such systems are capable of parsing the dialogue semantics in terms of these fixed intent labels, they lack understanding of the semantics of these labels. For instance, for the user utterance “I want to buy tickets for a movie.", such models can predict BuyMovieTickets as the correct intent based on patterns observed in the training data, but don't model either its association with the real world action of buying movie tickets, or its similarity to the action of buying concert or theatre tickets. Furthermore, because of their dependence on a fixed schema, such models are not robust to changes in the schema, and need to be retrained as new slots or intents are added. Use of domain-specific parameters renders some approaches unsuitable for zero-shot application.
Task
The primary task of this challenge is to develop multi-domain models for DST suitable for the scale and complexity of large scale virtual assistants. Supporting a wide variety of APIs or services with possibly overlapping functionality is an important requirement of such assistants. A common approach to do this involves defining a large master schema that lists all intents and slots supported by the assistant. Each service either adopts this master schema for the representation of the underlying data, or provides logic to translate between its own schema and the master schema.
The first approach involving adoption of the master schema is not ideal if a service wishes to integrate with multiple assistants, since each of the assistants could have their own master schema. The second approach involves definition of logic for translation between master schema and the service's schema, which increases the maintenance workload. Furthermore, it is difficult to develop a master schema catering to all possible use cases.
Additionally, while there are many similar concepts across services that can be jointly modeled, for example, the similarities in logic for querying or specifying the number of movie tickets, flight tickets or concert tickets, the master schema approach does not facilitate joint modeling of such concepts, unless an explicit mapping between them is manually defined. To address these limitations, we propose a schema-guided approach, which eliminates the need for a master schema.
Task ::: Schema-Guided Approach
Under the Schema-Guided approach, each service provides a schema listing the supported slots and intents along with their natural language descriptions (Figure FIGREF2 shows an example). The dialogue annotations are guided by the schema of the underlying service or API, as shown in Figure FIGREF3. In this example, the departure and arrival cities are captured by analogously functioning but differently named slots in both schemas. Furthermore, values for the number_stops and direct_only slots highlight idiosyncrasies between services interpreting the same concept.
The natural language descriptions present in the schema are used to obtain a semantic representation of intents and slots. The assistant employs a single unified model containing no domain or service specific parameters to make predictions conditioned on these schema elements. Using a single model facilitates representation and transfer of common knowledge across related concepts in different services. Since the model utilizes semantic representation of schema elements as input, it can interface with unseen services or APIs on which it has not been trained. It is also robust to changes like the addition of new intents or slots to the service. In addition, the participants are allowed to use any external datasets or resources to bootstrap their models.
Dataset
As shown in Table TABREF9, our Schema-Guided Dialogue (SGD) dataset exceeds other datasets on most scale-related metrics. The notably larger number of domains, slots, and slot values, and the presence of multiple services per domain, are representative of these scale-related challenges. Furthermore, our evaluation sets contain many services, and consequently slots, that are not present in the training set, to help evaluate model performance on unseen services.
Dataset ::: Data Representation
The dataset consists of conversations between a virtual assistant and a user. Each conversation can span multiple services across various domains. The dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slots values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.
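A hedged sketch of these per-turn annotations is given below. The utterance, service name and slot values are invented for illustration, and the field names are a plausible rendering of the annotations described above rather than the dataset's exact keys.

```python
# One user turn and one system turn, annotated with frames as described above.
user_turn = {
    "speaker": "USER",
    "utterance": "I need a flight from Baltimore to Seattle.",
    "frames": [
        {
            "service": "Flights_A",
            "state": {
                "active_intent": "SearchFlights",
                "requested_slots": [],
                "slot_values": {"origin": ["Baltimore"], "destination": ["Seattle"]},
            },
            # character offsets (0-based, end exclusive) of values in this utterance
            "slots": [
                {"slot": "origin", "start": 21, "exclusive_end": 30},
                {"slot": "destination", "start": 34, "exclusive_end": 41},
            ],
        }
    ],
}

system_turn = {
    "speaker": "SYSTEM",
    "utterance": "What date do you want to travel?",
    "frames": [
        {"service": "Flights_A",
         # a dialogue act with an optional slot parameter
         "actions": [{"act": "REQUEST", "slot": "date", "values": []}]}
    ],
}
```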
In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema.
Dataset ::: Comparison With Other Datasets
To reflect the constraints present in real-world services and APIs, we impose a few constraints on the data. Our dataset does not expose the set of all possible values for certain slots. Having such a list is impractical for slots like date or time because they have infinitely many possible values or for slots like movie or song names, for which new values are periodically added. Such slots are specifically identified as non-categorical slots. In our evaluation sets, we ensured the presence of a significant number of values which were not previously seen in the training set to evaluate the performance of models on unseen values. Some slots like gender, number of people, etc. are classified as categorical and we provide a list of all possible values for them. However, these values are assumed to be not consistent across services. E.g., different services may use (`male', `female'), (`M', `F') or (`he', `she') as possible values for gender slot.
Real-world services can only be invoked with certain slot combinations: e.g. most restaurant reservation APIs do not let the user search for restaurants by date without specifying a location. Although this constraint has no implications for the dialogue state tracking task, it restricts the possible conversational flows. Hence, to prevent flows not supported by actual services, we restrict each service to be invoked only with a predefined list of slot combinations. The different service calls supported by a service are listed as intents, with each intent specifying a list of required slots. The intent cannot be called without providing values for these required slots. Each intent also contains a list of optional slots with default values which can be overridden by the user.
In our dataset, we also have multiple services per domain with overlapping functionality. The intents across these services are similar but differ in terms of intent names, intent arguments, slot names, etc. In some cases, there is no one to one mapping between slot names (e.g., the num_stops and direct_only slots in Figure FIGREF3). With an ever increasing number of services and service providers, we believe that having multiple similar services per domain is much closer to the situation faced by virtual assistants than having one unique service per domain.
Dataset ::: Data Collection And Dataset Analysis
Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Using a dialogue simulator offers us multiple advantages. First, it ensures the coverage of a large variety of dialogue flows by filtering out similar flows in the simulation phase, thus creating a much more diverse dataset. Second, simulated dialogues do not require manual annotation, as opposed to a Wizard-of-Oz setup BIBREF17, which is a common approach utilized in other datasets BIBREF0. It has been shown that such datasets suffer from substantial annotation errors BIBREF18. Third, using a simulator greatly simplifies the data collection task and instructions, as only paraphrasing is needed to achieve a natural dialogue. This is particularly important for creating a large dataset spanning multiple domains.
The 20 domains present across the train, dev and test datasets are listed in Table TABREF10, as are the details regarding which domains are present in each of the datasets. We create synthetic implementations of a total of 45 services or APIs over these domains. Our simulator framework interacts with these services to generate dialogue outlines, which are structured representations of dialogue semantics. We then use a crowd-sourcing procedure to paraphrase these outlines to natural language utterances. Our novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection. In this section, we describe these steps briefly and then present analyses of the collected dataset.
All the services are implemented using a SQL engine. Since entity attributes are often correlated, we decided not to sample synthetic entities and instead relied on sampling entities from Freebase. The dialogue simulator interacts with the services to generate valid dialogue outlines. The simulator consists of two agents playing the roles of the user and the system. Both agents interact with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. At the start of the conversation, the user agent is seeded with a scenario, which is a sequence of intents to be fulfilled. The user agent generates dialogue acts to be output and combines them with values retrieved from the service/API to create the user actions. The system agent responds by following a similar procedure but also ensures that the generated flows are valid. We identified over 200 distinct scenarios for the training set, each consisting of up to 5 intents from various domains. Finally, the dialogue outlines generated are paraphrased into natural conversations by crowd workers. We ensure that the annotations for the dialogue state and slots generated by the simulator are preserved and hence need no further annotation. We omit further details for brevity; please refer to BIBREF19.
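The following is a highly simplified sketch of that outline-generation loop, assuming toy user and system agents; the real simulator relies on a probabilistic automaton over dialogue acts and on synthetic service implementations backed by a SQL engine, and the dialogue act names below are illustrative.

```python
import random

# Two toy agents exchange dialogue acts until the scenario (a sequence of
# intents) is fulfilled or the turn budget is exhausted.

def user_agent(remaining_intents):
    """Emit the next user action for the first unfulfilled intent."""
    if not remaining_intents:
        return {"act": "THANK_YOU"}
    return {"act": random.choice(["INFORM", "REQUEST"]),
            "intent": remaining_intents[0]}

def system_agent(user_action):
    """Respond to the user action, keeping the generated flow valid."""
    if user_action["act"] == "THANK_YOU":
        return {"act": "GOODBYE"}
    return {"act": random.choice(["REQUEST", "OFFER", "NOTIFY_SUCCESS"])}

def generate_outline(scenario, max_turns=30):
    """Return a structured outline (list of user/system action pairs)."""
    outline, remaining = [], list(scenario)
    for _ in range(max_turns):
        user_action = user_agent(remaining)
        system_action = system_agent(user_action)
        outline.append((user_action, system_action))
        if system_action["act"] == "NOTIFY_SUCCESS" and remaining:
            remaining.pop(0)            # current intent fulfilled
        if system_action["act"] == "GOODBYE":
            break
    return outline

outline = generate_outline(["SearchFlights", "ReserveHotel"])
```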
The entire dataset consists of over 16K dialogues spanning multiple domains. Overall statistics of the dataset and comparison with other datasets can be seen in Table TABREF9. Figure FIGREF8 shows the details of the distribution of dialogue lengths across single-domain and multi-domain dialogues. The single-domain dialogues in our dataset contain an average of 15.3 turns, whereas the multi-domain ones contain 23 turns on average. Figure FIGREF8 shows the frequency of the different dialogue acts contained in the dataset. The dataset also contains a significant number of unseen domains/APIs in the dev and test sets. 77% of the dialogue turns in the test set and 45% of the turns in dev set contain at least one service not present in the training set. This facilitates the development of models which can generalize to new domains with very few labelled examples.
Submissions
The submissions from 25 teams included a variety of approaches and innovative solutions to specific problems posed by this dataset. For the workshop, we received submissions from 9 of these teams. In this section, we provide a short summary of the approaches followed by these teams. For effective generalization to unseen APIs, most teams used pre-trained encoders to encode schema element descriptions. Unless otherwise mentioned, a pre-trained BERT BIBREF20 encoder was used.
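As a rough illustration, a submission could embed an intent or slot description with a pre-trained BERT encoder along the following lines; the model name and the [CLS] pooling choice are assumptions, and individual submissions differ in these details.

```python
import torch
from transformers import BertModel, BertTokenizer

# Embed a schema element description into a fixed-size vector with BERT.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def embed_description(text: str) -> torch.Tensor:
    """Return a fixed-size embedding for an intent/slot description."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    # Use the [CLS] token representation as the schema element embedding.
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

slot_embedding = embed_description("Number of stops in the flight itinerary")
```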
Team 2 BIBREF21: This was the only paper not using a pre-trained encoder, thus providing another important baseline. They rely on separate RNNs to encode service, slot and intent descriptions, and a BiRNN to encode dialogue history. Slot values are inferred using a TRADE-like encoder-decoder setup with a 3-way slot status gate, using the utterance encoding and schema element embeddings as context.
Team 5 BIBREF22: They predict values for categorical slots using a softmax over all candidate values. Non-categorical slot values are predicted by first predicting the status of each slot and then using a BiLSTM-CRF layer for BIO tagging BIBREF23. They also utilize a slot adoption tracker to predict if the values proposed by the system are accepted by the user.
Team 9 BIBREF24: This team submitted the winning entry, beating the second-placed team by around 9% in terms of joint goal accuracy. They use two separate models for categorical and non-categorical slots, and treat numerical categorical slots as non-categorical. They also use the entire dialogue history as input. They perform data augmentation by back translation between English and Chinese, which seems to be one of the distinguishing factors resulting in a much higher accuracy.
Team 12 BIBREF25: They use auxiliary binary features to connect previous intent to current intent, slots to dialogue history and source slots to target slots for slot transfer. Non-categorical slots are modeled similar to question answering by adding a null token and predicting spans for slot values. In-domain and cross-domain slot transfers are modeled as separate binary decisions by passing the slot descriptions as additional inputs.
Team 16 BIBREF26: They convert the tracking task for both categorical and non-categorical slots into a question answering task by feeding in the schema and the previous turns as the context. Similar to the baseline model, prediction is performed in two stages. The status of each slot (active/inactive/dontcare) is predicted using a classifier, following which the value is predicted as a span in the context. The same network is used for the different prediction tasks but the leading token and separator tokens used are different. They observe large gains by fine-tuning the schema embeddings and increasing the number of past turns fed as context.
Team 23 BIBREF27: They use a large-scale multi-task model utilizing a single pass of a BERT-based model for all tasks. Embeddings are calculated for the intents and slot values using the dialogue history, service and slot descriptions, and possible values for categorical slots, and these embeddings are used for the various predictions.
Anonymous Team A BIBREF28: We could not identify which team submitted this model. They use multi-head attention twice to obtain domain-conditioned and slot-conditioned representations of the dialogue history. These representations are concatenated to obtain the full context which is used for the various predictions.
Anonymous Team B BIBREF29: We could not identify which team submitted this model. They use separate NLU systems for the sub tasks of predicting intents, requested slots, slot status, categorical and non-categorical slot values. They use a rule-based DST system with a few additions resulting in significant improvement. The improvements include adding dropout to intent prediction to account for train-test mismatch, using the entire predicted slot status distribution and separate binary predictions for slot transfer.
Anonymous Team C BIBREF30: They use a two-stage model with a candidate tracker for NLU and a candidate classifier to update the dialogue state. A slot tagger identifies slot values, which are used to update the candidate tracker. The candidate classifier uses the utterances and slot/intent descriptions to predict the final dialogue state. They also use an additional loss to penalize incorrect prediction on which slots appear in the current turn.
Evaluation
We consider the following metrics for automatic evaluation of different submissions. Joint goal accuracy has been used as the primary metric to rank the submissions.
Active Intent Accuracy: The fraction of user turns for which the active intent has been correctly predicted.
Requested Slot F1: The macro-averaged F1 score for requested slots over all eligible turns. Turns with no requested slots in ground truth and predictions are skipped.
Average Goal Accuracy: For each turn, we predict a single value for each slot present in the dialogue state. This is the average accuracy of predicting the value of a slot correctly.
Joint Goal Accuracy: This is the average accuracy of predicting all slot assignments for a given service in a turn correctly.
In order to better reflect model performance in our task's specific setting, we introduce changes in the definitions of evaluation metrics from prior work. These are listed below:
Joint goal accuracy calculation: Traditionally, joint goal accuracy has been defined as the accuracy of predicting the dialogue state for all domains correctly. This is not practical in our setup, as the large number of services would result in near zero joint goal accuracy if the traditional definition is used. Furthermore, an incorrect dialogue state prediction for a service in the beginning of a dialogue degrades the joint goal accuracy for all future turns, even if the predictions for all other services are correct. Hence, joint goal accuracy calculated this way may not provide as much insight into the performance on different services. To address these concerns, only the services which are active or pertinent in a turn are included in the dialogue state. Thus, a service ceases to be a part of the dialogue state once its intent has been fulfilled.
Fuzzy matching for non-categorical slot values: The presence of non-categorical slots is another distinguishing feature of our dataset. These slots don't have a predefined vocabulary, and their values are predicted as a substring or span of the past user or system utterances. Drawing inspiration from the metrics used for slot tagging in spoken language understanding, we use a fuzzy matching score for non-categorical slots to reward partial matches with the ground truth.
Average goal accuracy: To calculate average goal accuracy, we do not take into account instances when both the ground truth and the predicted value for a slot are empty. Since, for a given slot, a large number of utterances have an empty assignment, models could achieve a relatively high average goal accuracy just by predicting an empty assignment for each slot unless such instances are specifically excluded, as in our evaluation. A sketch of these modified metrics follows this list.
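The sketch below is one plausible implementation of these modified metrics; the exact fuzzy-matching function used by the organizers is not specified here, so a simple character-level similarity from Python's standard library stands in for it.

```python
from difflib import SequenceMatcher

# `preds` and `golds` are parallel lists with one dict of slot -> value per
# (turn, active service) pair; `non_categorical` is the set of slot names
# without a fixed vocabulary.

def fuzzy_score(prediction: str, reference: str) -> float:
    """Partial-credit similarity for non-categorical values (an assumption:
    the organizers' exact fuzzy-matching function may differ)."""
    return SequenceMatcher(None, prediction.lower(), reference.lower()).ratio()

def joint_goal_accuracy(preds, golds):
    """Fraction of (turn, active service) pairs whose full state is correct.
    Only services active in the turn are included, per the modified definition."""
    correct = sum(int(p == g) for p, g in zip(preds, golds))
    return correct / max(len(golds), 1)

def average_goal_accuracy(preds, golds, non_categorical):
    """Per-slot score, skipping slots empty in both prediction and ground
    truth; non-categorical slots receive partial credit via fuzzy_score."""
    score, total = 0.0, 0
    for pred, gold in zip(preds, golds):
        for slot, gold_value in gold.items():
            pred_value = pred.get(slot, "")
            if not gold_value and not pred_value:
                continue                     # ignore empty-empty assignments
            total += 1
            if slot in non_categorical:
                score += fuzzy_score(pred_value, gold_value)
            else:
                score += float(pred_value == gold_value)
    return score / max(total, 1)
```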
Results
The test set contains a total of 21 services, among which 6 services are also present in the training set (seen services), whereas the remaining 15 are not present in the training set (unseen services). Table TABREF11 shows the evaluation metrics for the different submissions obtained on the test set. It also lists the performance of different submissions on seen and unseen services, helping evaluate the effectiveness in zero-shot settings. Team 9 achieved a very high joint goal accuracy of 86.53%, around 9% higher than the second-placed team. We observed the following trends across submissions:
For unseen services, performance on categorical slots is comparable to that on non-categorical slots. On the other hand, for seen services, the performance on categorical slots is better. This could be because there is less signal to differentiate between the different possible values for a categorical slot when they have not been observed in the training set.
The winning team's performance on seen services is similar to that of the other top teams. However, the winning team has a considerable edge on unseen services, outperforming the second team by around 12% in terms of joint goal accuracy. This margin was observed across both categorical and non-categorical slots.
Among unseen services, when looking at services belonging to unseen domains, the winning team was ahead of the other teams by at least 15%. The performance on categorical slots for unseen domains was about the same as that for seen services and domains. For other teams, there was at least a 20% drop in accuracy of categorical slots in unseen domains vs seen domains and services.
The joint goal accuracy of most of the models was worse by 15 percentage points on average on the test set as compared to the dev set. This could be because the test set contains a much higher proportion of turns with at least one unseen service as compared to the dev set (77% and 45% respectively).
Summary
In this paper, we summarized the Schema-Guided Dialogue State Tracking task conducted at the Eighth Dialogue System Technology Challenge. This task challenged participants to develop dialogue state tracking models for large scale virtual assistants, with particular emphasis on joint modeling across different domains and APIs for data efficiency and zero-shot generalization to new/unseen APIs. In order to encourage the development of such models, we constructed a new dataset spanning 16 domains (and 4 new domains in the dev and test sets), defining multiple APIs with overlapping functionality for each of these domains. We advocated the use of a schema-guided approach to building large-scale assistants, which facilitates data-efficient joint modeling across domains while reducing the maintenance workload.
The Schema-Guided Dialogue dataset released as part of this task is the first to highlight many of the aforementioned challenges. As a result, this task led to the development of several models utilizing the schema-guided approach for dialogue state tracking. The models extensively utilized pre-trained encoders like BERT BIBREF20, XLNet BIBREF31 etc. and employed data augmentation techniques to achieve effective zero-shot generalization to new APIs. The proposed schema-guided approach is fairly general and can be used to develop other dialogue system components such as language understanding, policy and response generation. We plan to explore them in future works.
Summary ::: Acknowledgements
The authors thank Guan-Lin Chao, Amir Fayazi and Maria Wang for their advice and assistance.
| Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather |
458dbf217218fcab9153e33045aac08a2c8a38c6 | 458dbf217218fcab9153e33045aac08a2c8a38c6_0 | Q: How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?
Text: Introduction
Sentiment analysis is a crucial task in the opinion mining field, where the goal is to extract opinions, emotions, or attitudes toward different entities (persons, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and those for other languages. It is therefore not surprising that some researchers have tested the straightforward approach of first translating the messages to English and then using a high-performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1), instead of creating a sentiment classifier optimized for the given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2, BIBREF3, BIBREF4, BIBREF5).
This manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., a labeled dataset), where the classifier should be competitive with sentiment classifiers carefully tuned for specific languages. Our second contribution is a well-performing baseline against which new sentiment classifiers can be compared in a broad range of languages, and which can be used to bootstrap new sentiment analysis systems. Our approach is based on selecting the best text-transformation techniques that optimize some performance measure, where the chosen techniques are robust to typical writing errors.
In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach's ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; for the remaining languages, we compare directly with the results reported in the literature. The experimental results place our approach in good positions in all the competitions considered, and it obtains excellent results in the other five languages tested. Finally, even though our method is almost language-independent, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.
The rest of the manuscript is organized as follows. Section SECREF2 describes our proposed sentiment analysis method. Section SECREF3 describes the datasets and contests used to test our approach, whereas the experimental results and the discussion are presented in Section SECREF4. Finally, Section SECREF5 concludes.
Our Approach: Multilingual Polarity Classification
We propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems, due to its simplicity and availability as open source software. As mentioned, this baseline algorithm for multilingual sentiment analysis (B4MSA) was designed with the purpose of being multilingual and easy to implement. B4MSA is not a naïve baseline, as is experimentally demonstrated by evaluating it in several international competitions.
In a nutshell, B4MSA starts by applying text transformations to the messages, then the transformed text is represented in a vector space model (see Subsection SECREF13), and finally a Support Vector Machine (with linear kernel) is used as the classifier. B4MSA uses a number of text transformations that are categorized into cross-language features (see Subsection SECREF3) and language-dependent features (see Subsection SECREF9). It is important to note that all the text transformations considered are either simple to implement or available in a well-known library (e.g. BIBREF6, BIBREF7). To maintain the cross-language property, we limit ourselves to not using additional knowledge; this includes knowledge from affective lexicons or models based on distributional semantics.
To obtain the best performance, one needs to select those text-transformations that work best for a particular dataset, therefore, B4MSA uses a simple random search and hill-climbing (see Subsection SECREF14 ) in space of text-transformations to free the user from this delicate and time-consuming task. Before going into the details of each text-transformation, Table TABREF2 gives a summary of the text-transformations used as well as their parameters associated.
Cross-language Features
We define cross-language features as a set of features that can be applied to most similar languages, not only within related language families such as the Germanic (English, German, etc.) or Romance (Spanish, Italian, etc.) languages, but also across languages sharing surface features such as punctuation, diacritics, symbol duplication, and case sensitivity. Later, the combination of these features is explored to find the best configuration for a given classifier.
Generally, Twitter messages are full of slang, misspellings, and typographical and grammatical errors, among other issues; in order to tackle these aspects, we consider different parameters and study their effect. The following parameters are considered as spelling features. Punctuation (del-punc) considers whether symbols such as question marks, periods, exclamation points, commas, and other punctuation marks are removed. Diacritic symbols (del-diac) are commonly used in languages such as Spanish, Italian, and Russian, and their incorrect usage is one of the main sources of orthographic errors in informal texts; this parameter considers the use or removal of diacritical marks. Symbol reduction (del-d1): Twitter messages often use repeated characters to emphasize parts of a word and attract the reader's attention, which makes the vocabulary explode; we apply the strategy of replacing runs of repeated symbols by a single occurrence of the symbol. Case sensitivity (lc) considers whether letters are normalized to lowercase or kept as in the original source; the aim is to merge words that differ only in case.
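A minimal sketch of these four spelling transformations, assuming plain Python and the standard library, is shown below; it reflects our reading of the parameters rather than the reference B4MSA implementation.

```python
import re
import unicodedata

def del_punc(text: str) -> str:
    """Remove punctuation symbols (all Unicode categories starting with 'P')."""
    return "".join(c for c in text if not unicodedata.category(c).startswith("P"))

def del_diac(text: str) -> str:
    """Remove diacritical marks (e.g., 'canción' -> 'cancion')."""
    norm = unicodedata.normalize("NFD", text)
    return "".join(c for c in norm if unicodedata.category(c) != "Mn")

def del_d1(text: str) -> str:
    """Collapse runs of a repeated character into a single occurrence."""
    return re.sub(r"(.)\1+", r"\1", text)

def lc(text: str) -> str:
    """Normalize to lowercase."""
    return text.lower()

print(del_d1(del_diac(lc("Holaaaa, qué alegríaaa!!!"))))  # -> "hola, que alegria!"
```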
We classified around 500 of the most popular emoticons, including text emoticons, and the whole set of Unicode emoticons (around INLINEFORM0 ) defined by BIBREF8 into three classes: positive, negative and neutral, which are grouped under the corresponding polarity word defined by the class name.
Table TABREF6 shows an excerpt of the dictionary that maps emoticons to their corresponding polarity class.
N-words (word sequences) are widely used in many NLP tasks, and they have also been used in sentiment analysis BIBREF9, BIBREF10. To compute the N-words, the text is tokenized and the N-words are calculated from the tokens. For example, let the text be "the lights and shadows of your future": its 1-words (unigrams) are each word alone, and its 2-words (bigrams) are the sequences of two words {the lights, lights and, and shadows, shadows of, of your, your future}, and so on. In general, a text of m words yields at most m - N + 1 N-words. Generally, N-words are used only up to 2-words or 3-words because it is uncommon to find, between texts, good matches of word sequences longer than three or four words BIBREF11.
In addition to the traditional N-words representation, we represent the resulting text as q-grams. A q-gram is a language-agnostic transformation that consists in representing a document by all of its substrings of length q. For example, the 3-grams of a text are all of its substrings of three consecutive characters, so a text of m characters yields a set with at most m - q + 1 elements (m - 2 for 3-grams). Notice that this transformation handles white-spaces as part of the text. Since there will be q-grams connecting words, applying q-grams to the entire text can, in some sense, capture part of the syntactic and contextual information in the sentence. The rationale of q-grams is also to tackle misspelled sentences from the approximate pattern matching perspective BIBREF12.
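The sketch below implements both tokenizers in a few lines; padding and token-boundary details may differ from the actual B4MSA tokenizer.

```python
def n_words(text: str, n: int) -> list:
    """Word sequences of length n (1-words are plain unigrams)."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def q_grams(text: str, q: int) -> list:
    """All character substrings of length q; white-spaces are kept."""
    return [text[i:i + q] for i in range(len(text) - q + 1)]

n_words("the lights and shadows of your future", 2)
# ['the lights', 'lights and', 'and shadows', 'shadows of', 'of your', 'your future']
q_grams("I like", 3)
# ['I l', ' li', 'lik', 'ike']
```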
Language Dependent Features
The following features are language dependent because they use specific information from the language concerned. Stopword removal, stemming and negation handling are traditionally used in sentiment analysis. Users of this approach could add other features, such as part-of-speech tags or affective lexicons, to improve the performance BIBREF13.
In many languages, there is a set of extremely common words such as determiners or conjunctions ( INLINEFORM0 or INLINEFORM1 ) which help to build sentences but do not carry any meaning for themselves. These words are known as Stopwords, and they are removed from text before any attempt to classify them. Generally, a stopword list is built using the most frequent terms from a huge document collection. We used the Spanish, English and Italian stopword lists included in the NLTK Python package BIBREF6 in order to identify them.
Stemming is a well-known heuristic process in Information Retrieval field that chops off the end of words and often includes the removal of derivational affixes. This technique uses the morphology of the language coded in a set of rules that are applied to find out word stems and reduce the vocabulary collapsing derivationally related words. In our study, we use the Snowball Stemmer for Spanish and Italian, and the Porter Stemmer for English that are implemented in NLTK package BIBREF6 .
Negation markers might change the polarity of a message. Thus, we attach the negation clue to the nearest word, similar to the approaches used in BIBREF9. A set of rules was designed for common negation structures that involve negation markers in Spanish, English and Italian. For instance, the negation markers used for Spanish are no (not), nunca, jamás (never), and sin (without). The rules (regular expressions) are processed in order, and their purpose is to negate the word nearest to the negation marker using only the information in the text, e.g., mainly skipping pronouns and articles. For example, in the sentence El coche no es bonito (The car is not nice), the negation marker no (not in English) is attached to the adjective, yielding no_bonito (not_nice).
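A simplified version of such a rule can be written as a single regular expression, as sketched below; the marker lists and the skipped words are a small illustrative subset of the actual ordered rule set.

```python
import re

NEGATORS = {
    "english": ["not", "no", "never"],
    "spanish": ["no", "nunca", "jamás", "sin"],
}

def attach_negations(text: str, lang: str = "spanish") -> str:
    """Attach each negation marker to the nearest following content word."""
    markers = "|".join(NEGATORS[lang])
    # Skip a few common articles/copulas so the negation lands on a content word.
    skip = r"(?:el|la|los|las|es|un|una)\s+" if lang == "spanish" else r"(?:a|an|the|is)\s+"
    pattern = rf"\b({markers})\s+(?:{skip})?(\w+)"
    return re.sub(pattern, r"\1_\2", text, flags=re.IGNORECASE)

attach_negations("El coche no es bonito")
# -> 'El coche no_bonito' (the copula is consumed by this simplified rule)
```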
Text Representation
After the text transformations, the text must be represented in a suitable form in order to use a traditional classifier such as an SVM. We selected the well-known vector representation of a text, given its simplicity and expressive power. In particular, we use the Term Frequency-Inverse Document Frequency (TF-IDF), a well-known weighting scheme in NLP. TF-IDF computes a weight that represents the importance of words or terms inside a document in a collection of documents, i.e., how frequently they appear across multiple documents. Therefore, common words such as the and in, which appear in many documents, will have a low score, and words that appear frequently in a single document will have a high score. This weighting scheme selects the terms that represent a document.
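A minimal end-to-end sketch of this stage, assuming scikit-learn and already-transformed texts, is shown below; the hyper-parameters are library defaults rather than the configuration selected by B4MSA's parameter search.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative training set of already-preprocessed messages.
train_texts = ["hola que alegria", "no_bonito el coche", "excelente servicio"]
train_labels = ["positive", "negative", "positive"]

model = make_pipeline(
    # Texts are pre-tokenized, so a plain whitespace split is used here.
    TfidfVectorizer(tokenizer=str.split, token_pattern=None, lowercase=False),
    LinearSVC(),
)
model.fit(train_texts, train_labels)
model.predict(["que alegria el servicio"])
```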
Parameter Optimization
The model selection, sometimes called hyper-parameter optimization, is essential to ensure the performance of a sentiment classifier. In particular, our approach is highly parametric; in fact, we use such property to adapt to several languages. Table TABREF2 summarizes the parameters and their valid values. The search space contains more than 331 thousand configurations when limited to multilingual and language independent parameters; while the search space reaches close to 4 million configurations when we add our three language-dependent parameters. Depending on the size of the training set, each configuration needs several minutes on a commodity server to be evaluated; thus, an exhaustive exploration of the parameter space can be quite expensive making the approach useless in practice. To tackle the efficiency problems, we perform the model selection using two hyper-parameter optimization algorithms.
The first corresponds to random search, described in depth in BIBREF14. Random search consists in randomly sampling the parameter space and selecting the best configuration among the sample. The second algorithm is a hill climbing BIBREF15, BIBREF16 implemented with a memory to avoid testing a configuration twice. The main idea behind hill climbing with memory (H+M) is to take a pivoting configuration, explore the configuration's neighborhood, and greedily move to the best neighbor. The process is repeated until no improvement is possible. The configuration neighborhood is defined as the set of configurations that differ in just one parameter value. This rule is strengthened for the tokenizer (see Table TABREF2) to differ in a single internal value, not in the whole parameter value. More precisely, if t is a valid tokenizer value and N(t) is its set of neighboring values, then every t' in N(t) differs from t in exactly one of its internal values.
To guarantee performance at least as good as that of random search, the H+M process starts with the best configuration found by the random search. When using H+M, the sample size can be set to 32 or 64 as a rule of thumb, and improvements are still reached in most cases (see § SECREF4). Nonetheless, this simplification and performance boost may come with higher optimization times. Finally, the performance of each configuration is obtained using cross-validation on the training data, with the metrics usually used in classification, such as accuracy, F1 score, and recall.
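The following sketch illustrates both search strategies over a small illustrative subset of the parameters in Table TABREF2; the evaluate function stands for the cross-validated score of a configuration and is assumed to be supplied by the caller.

```python
import random

PARAM_SPACE = {
    "del-punc": [True, False],
    "del-diac": [True, False],
    "del-d1": [True, False],
    "lc": [True, False],
}

def random_search(evaluate, samples=32):
    """Randomly sample the parameter space and keep the best configuration."""
    best, best_score = None, float("-inf")
    for _ in range(samples):
        conf = {key: random.choice(values) for key, values in PARAM_SPACE.items()}
        score = evaluate(conf)
        if score > best_score:
            best, best_score = conf, score
    return best, best_score

def neighbors(conf):
    """Configurations differing from `conf` in exactly one parameter value."""
    for key, values in PARAM_SPACE.items():
        for value in values:
            if value != conf[key]:
                yield {**conf, key: value}

def hill_climbing_with_memory(evaluate, start, start_score):
    """Greedy H+M: move to the best unseen neighbor until no improvement."""
    best, best_score = start, start_score
    seen = {tuple(sorted(start.items()))}
    while True:
        scored = []
        for cand in neighbors(best):
            key = tuple(sorted(cand.items()))
            if key in seen:
                continue                     # memory: never evaluate twice
            seen.add(key)
            scored.append((evaluate(cand), cand))
        if not scored:
            return best, best_score
        top_score, top_conf = max(scored, key=lambda pair: pair[0])
        if top_score <= best_score:
            return best, best_score
        best, best_score = top_conf, top_score

# Usage: conf, score = random_search(evaluate)
#        conf, score = hill_climbing_with_memory(evaluate, conf, score)
```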
Datasets and contests
Nowadays, there are several international competitions related to text mining, which include diverse tasks such as polarity classification (at different levels), subjectivity classification, entity detection, and irony detection, among others. These competitions are relevant to measure the potential of different proposed techniques. In this case, we focused on the polarity classification task; hence, we developed a baseline method with an acceptable performance in three different contests, namely TASS'15 (Spanish) BIBREF17, SemEval'15-16 (English) BIBREF18, BIBREF19, and SENTIPOLC'14 (Italian) BIBREF20. In addition, our approach was tested on other languages (Arabic, German, Portuguese, Russian, and Swedish) to show that it is feasible to use our framework as a basis for building more complex sentiment analysis systems. For these languages, the datasets and results can be seen in BIBREF21, BIBREF3 and BIBREF2.
Table TABREF15 presents the details of each of the competitions considered, as well as of the other languages tested. The table shows the number of examples as well as the number of instances for each polarity level, namely positive, neutral, negative and none. The training and development sets (the latter only available in SemEval) are used to train the sentiment classifier, and the gold set is used to test it. In the cases where the dataset was not split into training and gold sets (Arabic, German, Portuguese, Russian, and Swedish), a 10-fold cross-validation is used to test the classifier. The performance of the classifier is reported using different metrics depending on the competition: SemEval uses the average F1 score of the positive and negative labels, TASS uses accuracy, and SENTIPOLC uses a custom metric (see BIBREF17, BIBREF18, BIBREF19, BIBREF20).
Experimental Results
We tested our framework on two kinds of datasets. On one hand, we compare our performance on three languages having well known sentiment analysis contests; here, we compare our work against competitors of those challenges. On the other hand, we selected five languages without popular opinion mining contests; for these languages, we compare our approach against research works reporting the used corpus.
Performance on sentiment analysis contests
Figure FIGREF17 shows the performance on four contests, corresponding to three different languages. The performance corresponds to the multilingual set of features, i.e., we do not use language-dependent techniques.
Figures UID18 - UID21 illustrate the results on each challenge; all competitors are ordered in descending order of score (higher is better). The performance achieved by our approach is marked with a horizontal line on each figure. Figure UID22 briefly describes each challenge and summarizes our performance on each contest; we also added three standard measures to help the reader draw insights.
The winning method in SENTIPOLC'14 (Italian) is reported in BIBREF22. This method uses three groups of features: keyword and micro-blogging characteristics; sentiment lexicons, SentiWordNet and MultiWordNet; and a Distributional Semantic Model (DSM), with an SVM classifier. In contrast with our method, BIBREF22 employs three external sentiment lexicons, that is, external information.
In the TASS'15 (Spanish) competition, the winning method was BIBREF23, which proposed an adaptation based on the tweet tokenizer Tweetmotif BIBREF24, Freeling BIBREF25 as lemmatizer, entity detector and morphosyntactic labeler, and a translation of the Afinn dictionary. In contrast with our method, BIBREF23 employs several complex and expensive tools. In this task we reached fourteenth position with an accuracy of INLINEFORM0. Figure UID19 shows that B4MSA's performance is above that of two thirds of the competitors.
The remaining two contests correspond to SemEval'15-16. The B4MSA performance in SemEval is depicted in Figures UID20 and UID21; here, B4MSA does not perform as well as in the other challenges, mainly because, contrary to the other challenges, the SemEval rules promote the enrichment of the official training set. To be consistent with the rest of the experiments, B4MSA uses only the official training set. The results can be significantly improved using larger training datasets; for example, joining the SemEval'13 and SemEval'16 training sets, we can reach INLINEFORM0 for SemEval'16, which improves B4MSA's performance (see Table FIGREF17).
In SemEval'15, the winning method is BIBREF26, which combines three approaches from the participants of SemEval'13 (the NRC-Canada, GU-MLT-LT and KLUE teams) and the SemEval'14 participant TeamX, all of them employing external information. In SemEval'16, the winning method BIBREF27 is composed of an ensemble of two subsystems based on convolutional neural networks; the first subsystem is created using 290 million tweets, and the second one is fed with 150 million tweets. All these tweets were selected from a very large unlabeled dataset through distant supervision techniques.
Table TABREF23 shows the multilingual set of techniques and the set with language-dependent techniques; for each, we optimized the set of parameters through random search and H+M (see Subsection SECREF14). The performance reached is reported using both cross-validation and the official gold standard. Please notice how H+M consistently reaches better performance, even with small sampling sizes. The sampling size is indicated with subscripts in Table TABREF23. Note that, in the SemEval challenges, the cross-validation performances are higher than those reached by evaluating on the gold standard, mainly because the gold standard does not follow the distribution of the training set. This can be understood because the rules of SemEval promote the use of external knowledge.
Table TABREF24 compares our performance on five different languages; here we do not apply language-dependent techniques. For each comparison, we took a labeled corpus from BIBREF3 (Arabic) and BIBREF21 (the remaining languages). According to the authors' reports, all tweets were manually labeled by native speakers as pos, neg, or neu. The Arabic dataset contains INLINEFORM0 items; the other datasets contain from 58 thousand tweets to more than 157 thousand tweets. We were able to fetch only a fraction of the original datasets, so we dropped the necessary items to preserve the original class-population ratio. The ratio of tweets in our training dataset with respect to the original dataset is indicated beside the name. As before, we evaluate our algorithms through 10-fold cross-validation.
In BIBREF3, BIBREF2, the authors study the effect of translation on sentiment classifiers; they found it better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21, the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. On the technical side, both papers use fine-tuned classifiers plus a variety of pre-processing techniques to prove their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, overall, B4MSA reaches superior performance regardless of the language. Our approach achieves these performance levels because it optimizes a set of parameters carefully selected to work on a variety of languages and to be robust to informal writing. The latter problem is not properly tackled in many cases.
Conclusions
We presented a simple to implement multilingual framework for polarity classification whose main contributions are in two aspects. On one hand, our approach can serve as a baseline to compare other classification systems. It considers techniques for text representation such as spelling features, emoticons, word-based n-grams, character-based q-grams and language dependent features. On the other hand, our approach is a framework for practitioners or researchers looking for a bootstrapping sentiment classifier method in order to build more elaborated systems.
Besides the text-transformations, the proposed framework uses a SVM classifier (with linear kernel), and, hyper-parameter optimization using random search and H+M over the space of text-transformations. The experimental results show good overall performance in all international contests considered, and the best results in the other five languages tested.
It is important to note that all the methods that outperformed B4MSA in the sentiment analysis contests use extra knowledge (lexicons included), whereas B4MSA uses only the information provided by each contest. In future work, we will extend our methodology to include extra knowledge in order to improve the performance.
Acknowledgements
We would like to thank Valerio Basile, Julio Villena-Roman, and Preslav Nakov for kindly giving us access to the gold standards of SENTIPOLC'14, TASS'15 and SemEval 2015 & 2016, respectively. The authors also thank Elio Villaseñor for the helpful discussions in early stages of this research.
| Total number of annotated data:
Semeval'15: 10712
Semeval'16: 28632
Tass'15: 69000
Sentipol'14: 6428 |
cebf3e07057339047326cb2f8863ee633a62f49f | cebf3e07057339047326cb2f8863ee633a62f49f_0 | Q: In which languages did the approach outperform the reported results?
Text: Introduction
Sentiment analysis is a crucial task in the opinion mining field, where the goal is to extract opinions, emotions, or attitudes toward different entities (persons, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and those for other languages. It is therefore not surprising that some researchers have tested the straightforward approach of first translating the messages to English and then using a high-performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1), instead of creating a sentiment classifier optimized for the given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2, BIBREF3, BIBREF4, BIBREF5).
This manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., a labeled dataset), where the classifier should be competitive with sentiment classifiers carefully tuned for specific languages. Our second contribution is a well-performing baseline against which new sentiment classifiers can be compared in a broad range of languages, and which can be used to bootstrap new sentiment analysis systems. Our approach is based on selecting the best text-transformation techniques that optimize some performance measure, where the chosen techniques are robust to typical writing errors.
In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach's ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; for the remaining languages, we compare directly with the results reported in the literature. The experimental results place our approach in good positions in all the competitions considered, and it obtains excellent results in the other five languages tested. Finally, even though our method is almost language-independent, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.
The rest of the manuscript is organized as follows. Section SECREF2 describes our proposed sentiment analysis method. Section SECREF3 describes the datasets and contests used to test our approach, whereas the experimental results and the discussion are presented in Section SECREF4. Finally, Section SECREF5 concludes.
Our Approach: Multilingual Polarity Classification
We propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems, due to its simplicity and availability as open source software. As mentioned, this baseline algorithm for multilingual sentiment analysis (B4MSA) was designed with the purpose of being multilingual and easy to implement. B4MSA is not a naïve baseline, as is experimentally demonstrated by evaluating it in several international competitions.
In a nutshell, B4MSA starts by applying text transformations to the messages, then the transformed text is represented in a vector space model (see Subsection SECREF13), and finally a Support Vector Machine (with linear kernel) is used as the classifier. B4MSA uses a number of text transformations that are categorized into cross-language features (see Subsection SECREF3) and language-dependent features (see Subsection SECREF9). It is important to note that all the text transformations considered are either simple to implement or available in a well-known library (e.g. BIBREF6, BIBREF7). To maintain the cross-language property, we limit ourselves to not using additional knowledge; this includes knowledge from affective lexicons or models based on distributional semantics.
To obtain the best performance, one needs to select those text-transformations that work best for a particular dataset, therefore, B4MSA uses a simple random search and hill-climbing (see Subsection SECREF14 ) in space of text-transformations to free the user from this delicate and time-consuming task. Before going into the details of each text-transformation, Table TABREF2 gives a summary of the text-transformations used as well as their parameters associated.
Cross-language Features
We define cross-language features as a set of features that can be applied to most similar languages, not only within related language families such as the Germanic (English, German, etc.) or Romance (Spanish, Italian, etc.) languages, but also across languages sharing surface features such as punctuation, diacritics, symbol duplication, and case sensitivity. Later, the combination of these features is explored to find the best configuration for a given classifier.
Generally, Twitter messages are full of slang, misspellings, and typographical and grammatical errors, among other issues; in order to tackle these aspects, we consider different parameters and study their effect. The following parameters are considered as spelling features. Punctuation (del-punc) considers whether symbols such as question marks, periods, exclamation points, commas, and other punctuation marks are removed. Diacritic symbols (del-diac) are commonly used in languages such as Spanish, Italian, and Russian, and their incorrect usage is one of the main sources of orthographic errors in informal texts; this parameter considers the use or removal of diacritical marks. Symbol reduction (del-d1): Twitter messages often use repeated characters to emphasize parts of a word and attract the reader's attention, which makes the vocabulary explode; we apply the strategy of replacing runs of repeated symbols by a single occurrence of the symbol. Case sensitivity (lc) considers whether letters are normalized to lowercase or kept as in the original source; the aim is to merge words that differ only in case.
We classified around 500 of the most popular emoticons, including text emoticons, and the whole set of Unicode emoticons (around INLINEFORM0 ) defined by BIBREF8 into three classes: positive, negative and neutral, which are grouped under the corresponding polarity word defined by the class name.
Table TABREF6 shows an excerpt of the dictionary that maps emoticons to their corresponding polarity class.
N-words (word sequences) are widely used in many NLP tasks, and they have also been used in sentiment analysis BIBREF9, BIBREF10. To compute the N-words, the text is tokenized and the N-words are calculated from the tokens. For example, let the text be "the lights and shadows of your future": its 1-words (unigrams) are each word alone, and its 2-words (bigrams) are the sequences of two words {the lights, lights and, and shadows, shadows of, of your, your future}, and so on. In general, a text of m words yields at most m - N + 1 N-words. Generally, N-words are used only up to 2-words or 3-words because it is uncommon to find, between texts, good matches of word sequences longer than three or four words BIBREF11.
In addition to the traditional N-words representation, we represent the resulting text as q-grams. A q-gram is a language-agnostic transformation that consists in representing a document by all of its substrings of length q. For example, the 3-grams of a text are all of its substrings of three consecutive characters, so a text of m characters yields a set with at most m - q + 1 elements (m - 2 for 3-grams). Notice that this transformation handles white-spaces as part of the text. Since there will be q-grams connecting words, applying q-grams to the entire text can, in some sense, capture part of the syntactic and contextual information in the sentence. The rationale of q-grams is also to tackle misspelled sentences from the approximate pattern matching perspective BIBREF12.
Language Dependent Features
The following features are language dependent because they use specific information from the language concerned. Stopword removal, stemming and negation handling are traditionally used in sentiment analysis. Users of this approach could add other features, such as part-of-speech tags or affective lexicons, to improve the performance BIBREF13.
In many languages, there is a set of extremely common words such as determiners or conjunctions ( INLINEFORM0 or INLINEFORM1 ) which help to build sentences but do not carry any meaning for themselves. These words are known as Stopwords, and they are removed from text before any attempt to classify them. Generally, a stopword list is built using the most frequent terms from a huge document collection. We used the Spanish, English and Italian stopword lists included in the NLTK Python package BIBREF6 in order to identify them.
Stemming is a well-known heuristic process in Information Retrieval field that chops off the end of words and often includes the removal of derivational affixes. This technique uses the morphology of the language coded in a set of rules that are applied to find out word stems and reduce the vocabulary collapsing derivationally related words. In our study, we use the Snowball Stemmer for Spanish and Italian, and the Porter Stemmer for English that are implemented in NLTK package BIBREF6 .
Negation markers might change the polarity of a message. Thus, we attach the negation clue to the nearest word, similar to the approaches used in BIBREF9. A set of rules was designed for common negation structures that involve negation markers in Spanish, English and Italian. For instance, the negation markers used for Spanish are no (not), nunca, jamás (never), and sin (without). The rules (regular expressions) are processed in order, and their purpose is to negate the word nearest to the negation marker using only the information in the text, e.g., mainly skipping pronouns and articles. For example, in the sentence El coche no es bonito (The car is not nice), the negation marker no (not in English) is attached to the adjective, yielding no_bonito (not_nice).
Text Representation
After the text transformations, the text must be represented in a suitable form in order to use a traditional classifier such as an SVM. We selected the well-known vector representation of a text, given its simplicity and expressive power. In particular, we use the Term Frequency-Inverse Document Frequency (TF-IDF), a well-known weighting scheme in NLP. TF-IDF computes a weight that represents the importance of words or terms inside a document in a collection of documents, i.e., how frequently they appear across multiple documents. Therefore, common words such as the and in, which appear in many documents, will have a low score, and words that appear frequently in a single document will have a high score. This weighting scheme selects the terms that represent a document.
Parameter Optimization
The model selection, sometimes called hyper-parameter optimization, is essential to ensure the performance of a sentiment classifier. In particular, our approach is highly parametric; in fact, we use such property to adapt to several languages. Table TABREF2 summarizes the parameters and their valid values. The search space contains more than 331 thousand configurations when limited to multilingual and language independent parameters; while the search space reaches close to 4 million configurations when we add our three language-dependent parameters. Depending on the size of the training set, each configuration needs several minutes on a commodity server to be evaluated; thus, an exhaustive exploration of the parameter space can be quite expensive making the approach useless in practice. To tackle the efficiency problems, we perform the model selection using two hyper-parameter optimization algorithms.
The first corresponds to random search, described in depth in BIBREF14. Random search consists in randomly sampling the parameter space and selecting the best configuration among the sample. The second algorithm is a hill climbing BIBREF15, BIBREF16 implemented with a memory to avoid testing a configuration twice. The main idea behind hill climbing with memory (H+M) is to take a pivoting configuration, explore the configuration's neighborhood, and greedily move to the best neighbor. The process is repeated until no improvement is possible. The configuration neighborhood is defined as the set of configurations that differ in just one parameter value. This rule is strengthened for the tokenizer (see Table TABREF2) to differ in a single internal value, not in the whole parameter value. More precisely, if t is a valid tokenizer value and N(t) is its set of neighboring values, then every t' in N(t) differs from t in exactly one of its internal values.
To guarantee performance at least as good as that of random search, the H+M process starts with the best configuration found by the random search. When using H+M, the sample size can be set to 32 or 64 as a rule of thumb, and improvements are still reached in most cases (see § SECREF4). Nonetheless, this simplification and performance boost may come with higher optimization times. Finally, the performance of each configuration is obtained using cross-validation on the training data, with the metrics usually used in classification, such as accuracy, F1 score, and recall.
Datasets and contests
Nowadays, there are several international competitions related to text mining, which include diverse tasks such as polarity classification (at different levels), subjectivity classification, entity detection, and irony detection, among others. These competitions are relevant to measure the potential of different proposed techniques. In this case, we focused on the polarity classification task; hence, we developed a baseline method with an acceptable performance in three different contests, namely TASS'15 (Spanish) BIBREF17, SemEval'15-16 (English) BIBREF18, BIBREF19, and SENTIPOLC'14 (Italian) BIBREF20. In addition, our approach was tested on other languages (Arabic, German, Portuguese, Russian, and Swedish) to show that it is feasible to use our framework as a basis for building more complex sentiment analysis systems. For these languages, the datasets and results can be seen in BIBREF21, BIBREF3 and BIBREF2.
Table TABREF15 presents the details of each competition considered as well as of the other languages tested. The table reports the number of examples and the number of instances for each polarity level, namely positive, neutral, negative and none. The training and development sets (the latter only in SemEval) are used to train the sentiment classifier, and the gold set is used to test it. When a dataset is not split into training and gold sets (Arabic, German, Portuguese, Russian, and Swedish), a 10-fold cross-validation is used to test the classifier. The performance of the classifier is reported using different metrics depending on the competition: SemEval uses the average INLINEFORM0 score of the positive and negative labels, TASS uses accuracy, and SENTIPOLC uses a custom metric (see BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 ).
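For concreteness, the two most common of these metrics can be computed as in the sketch below, using scikit-learn; the label names are assumptions about the dataset encoding.

from sklearn.metrics import accuracy_score, f1_score

y_true = ["positive", "negative", "neutral", "negative", "positive", "none"]
y_pred = ["positive", "negative", "negative", "negative", "neutral", "none"]

# SemEval-style score: average of the F1 of the positive and negative classes only.
f1_pos_neg = f1_score(y_true, y_pred, labels=["positive", "negative"], average="macro")
# TASS-style score: plain accuracy over all polarity levels.
accuracy = accuracy_score(y_true, y_pred)
print(f1_pos_neg, accuracy)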
Experimental Results
We tested our framework on two kinds of datasets. On the one hand, we measure our performance on three languages that have well-known sentiment analysis contests; here, we compare our work against the competitors of those challenges. On the other hand, we selected five languages without popular opinion-mining contests; for these languages, we compare our approach against the research works that report results on the corpora used.
Performance on sentiment analysis contests
Figure FIGREF17 shows the performance on four contests, corresponding to three different languages. The reported performance corresponds to the multilingual set of features, i.e., we did not use language-dependent techniques.
Figures UID18 - UID21 illustrate the results on each challenge; all competitors are ordered in descending order of score (higher is better). The performance achieved by our approach is marked with a horizontal line in each figure. Figure UID22 briefly describes each challenge and summarizes our performance on each contest; we also add three standard measures to make it easier for the reader to draw insights.
The winning method in SENTIPOLC'14 (Italian) is reported in BIBREF22 . This method uses three groups of features: keyword and micro-blogging characteristics; sentiment lexicons, SentiWordNet and MultiWordNet; and a Distributional Semantic Model (DSM), with an SVM classifier. In contrast with our method, BIBREF22 employs three external sentiment lexicons, that is, external information.
In the TASS'15 (Spanish) competition, the winning method was BIBREF23 , which combined the tweet tokenizer Tweetmotif BIBREF24 , Freeling BIBREF25 as lemmatizer, entity detector and morphosyntactic labeler, and a translation of the Afinn dictionary. In contrast with our method, BIBREF23 employs several complex and expensive tools. In this task we reached the fourteenth position with an accuracy of INLINEFORM0 . Figure UID19 shows that the B4MSA performance is above that of two thirds of the competitors.
The remaining two contests correspond to SemEval'15-16. The B4MSA performance in SemEval is depicted in Figures UID20 and UID21 ; here, B4MSA does not perform as well as in the other challenges, mainly because, contrary to the other challenges, the SemEval rules promote the enrichment of the official training set. To be consistent with the rest of the experiments, B4MSA uses only the official training set. The results can be significantly improved with larger training datasets; for example, joining the SemEval'13 and SemEval'16 training sets, we reach INLINEFORM0 for SemEval'16, which improves B4MSA's performance (see Table FIGREF17 ).
In SemEval'15, the winning method is BIBREF26 , which combines three approaches from the participants of SemEval'13 (the NRC-Canada, GU-MLT-LT and KLUE teams) and, from SemEval'14, the participant TeamX, all of them employing external information. In SemEval'16, the winning method BIBREF27 is an ensemble of two subsystems based on convolutional neural networks; the first subsystem is trained on 290 million tweets, and the second one is fed 150 million tweets. All these tweets were selected from a very large unlabeled dataset through distant supervision techniques.
Table TABREF23 shows the multilingual set of techniques and the set with language-dependent techniques; for each, we optimized the set of parameters through Random Search and INLINEFORM0 (see Subsection SECREF14 ). The resulting performance is reported using both cross-validation and the official gold standard. Notice how INLINEFORM1 consistently reaches better performance, even with small sample sizes; the sample size is indicated with subscripts in Table TABREF23 . Note that, in the SemEval challenges, the cross-validation performance is higher than the one obtained on the gold standard, mainly because the gold standard does not follow the distribution of the training set. This can be understood given that the rules of SemEval promote the use of external knowledge.
Table TABREF24 compares our performance on five different languages; here we do not apply language-dependent techniques. For each comparison, we took a labeled corpus from BIBREF3 (Arabic) and BIBREF21 (the remaining languages). According to the authors' reports, all tweets were manually labeled by native speakers as pos, neg, or neu. The Arabic dataset contains INLINEFORM0 items; the other datasets contain from 58 thousand to more than 157 thousand tweets. We were able to fetch only a fraction of the original datasets, so we dropped the necessary items to preserve the original class-population ratio. The ratio of tweets in our training dataset with respect to the original dataset is indicated beside the name. As before, we evaluate our algorithms through a 10-fold cross-validation.
In BIBREF3 , BIBREF2 , the authors study the effect of translation on sentiment classifiers; they found it better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21 , the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. On the technical side, both papers use fine-tuned classifiers plus a variety of pre-processing techniques to support their claims. Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, overall, B4MSA reaches superior performance regardless of the language. Our approach achieves these performance levels because it optimizes a set of parameters carefully selected to work on a variety of languages and to be robust to informal writing, a problem that is not properly tackled in many cases.
Conclusions
We presented a simple-to-implement multilingual framework for polarity classification whose main contributions are two-fold. On the one hand, our approach can serve as a baseline against which to compare other classification systems. It considers techniques for text representation such as spelling features, emoticons, word-based n-grams, character-based q-grams and language-dependent features. On the other hand, our approach is a framework for practitioners or researchers looking for a bootstrapping sentiment classifier on which to build more elaborate systems.
Besides the text-transformations, the proposed framework uses an SVM classifier (with a linear kernel) and hyper-parameter optimization with random search and H+M over the space of text-transformations. The experimental results show good overall performance in all the international contests considered, and the best results on the other five languages tested.
It is important to note that all the methods that outperformed B4MSA in the sentiment analysis contests use extra knowledge (lexicons included), whereas B4MSA uses only the information provided by each contest. In future work, we will extend our methodology to include extra knowledge in order to improve the performance.
Acknowledgements
We would like to thank Valerio Basile, Julio Villena-Roman, and Preslav Nakov for kindly giving us access to the gold standards of SENTIPOLC'14, TASS'15 and SemEval 2015 & 2016, respectively. The authors also thank Elio Villaseñor for the helpful discussions in early stages of this research. | Arabic, German, Portuguese, Russian, Swedish |
ef8099e2bc0ac4abc4f8216740e80f2fa22f41f6 | ef8099e2bc0ac4abc4f8216740e80f2fa22f41f6_0 | Q: What eight languages are reported on?
| Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish
1e68a1232ab09b6bff506e442acc8ad742972102 | 1e68a1232ab09b6bff506e442acc8ad742972102_0 | Q: What are the components of the multilingual framework?
| text-transformations to the messages, vector space model, Support Vector Machine
0bd992a6a218331aa771d922e3c7bb60b653949a | 0bd992a6a218331aa771d922e3c7bb60b653949a_0 | Q: Is the proposed method compared to previous methods?
Text: Introduction
Bayesian inference of phylogeny has had a great impact on evolutionary biology. It is believed that all species are related through a history of common descent BIBREF0; that is to say, the variety of wildlife we observe, including human beings, exists because of evolution. The process of evolution, and questions such as the phylogeny of life, can be represented and studied with a phylogenetic tree (see Figure FIGREF1).
As a matter of fact, any study of DNA sequences or protein patterns taken from a species or an individual organism can start with a phylogenetic analysis. Language change, which is often regarded as analogous to biological evolution, can similarly borrow phylogenetic analysis to uncover the process by which languages change. It is accepted that all human languages derive from a proto-language, the ancestor of the modern languages. For example, the Germanic languages (like English, German and Dutch), the Romance languages (like Spanish, French and Portuguese) and the Slavic languages (like Russian, Bosnian and Polish) all belong to the Indo-European family, which developed from Proto-Indo-European. Therefore, we can borrow computational methods from biology and evolution and apply them to the phylogeny of languages.
Not only Natural Language Processing (NLP) but more and more branches of linguistics now have digital datasets and use computational methods to solve their problems. Historical linguistics, in particular, has recently seen a dramatic increase in digital datasets. The availability of new data, and of research on different languages from different language families, has started to challenge the traditional way of studying historical linguistics. The comparative method has been the core method for linguistic reconstruction for the past 200 years, and is based on manually identifying systematic phonetic correspondences between many words in pairs of languages BIBREF2, because different cultures have different writing systems and different ways to spell their words. That is why the International Phonetic Alphabet was created: to help linguists analyze different languages in parallel. However, there are too few scholars, i.e., historical linguists, to analyze the world's more than 7500 languages BIBREF2, BIBREF3, including thousands of languages that have not been studied and are facing extinction. Computational methods can therefore help people do research on unstudied languages faster and offer possible solutions. The task of phylogenetic inference of human languages is composed of two parts: cognate set detection and phylogenetic tree construction. Cognate set detection automatically groups words from different languages with similar or plausibly related evolutionary patterns into one cluster. Phylogenetic tree construction then builds trees from the information in these clusters. In the following, I divide the work into these two main steps: the way I implement cognate detection is discussed in section 2; after that, I use the cluster data to run the phylogenetic inference program and build the relationship trees, as described in section 3. I show and evaluate the results in section 4, and conclude in section 5.
Cognate Detection
A great number of algorithms and mechanisms for automatic cognate detection in historical linguistics have been proposed and tested by linguists and computer scientists BIBREF2, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Many of these works are very similar to each other and consist of two main stages. In the first stage, words with the same meaning are extracted from the wordlists of different languages, within or across language families, and compared using a distance matrix that quantifies how similar they are. In the second stage, a flat clustering algorithm or a network partitioning algorithm is used to partition all words into cognate sets, taking the information in the matrix of word pairs as its basis BIBREF4, BIBREF5. However, the methods used to compare the word pairs differ considerably: researchers may pre-process their language datasets differently, or use different algorithms for the comparison and clustering tasks. For example, one might intuitively start automated word comparison by computing distances between words, as with word embeddings in NLP, e.g., GloVe BIBREF8, which capture the semantic similarity between two words. In computational historical linguistics, phonetic segments are used instead to calculate how close two words are, because the semantics of a single word does not change as readily as its phonetics. The problem is that, since the program involves data pre-processing, the whole dataset is traversed twice, and the computation becomes a serious issue once the dataset grows beyond roughly 100 languages. Consequently, people began to look for a faster method.
Cognate Detection ::: Consonant Class Matching Algorithm (CCM)
The first linear-time method was proposed by BIBREF9, and later modified by BIBREF6. The algorithm compares word pairs by their consonant classes. A consonant class is hereby understood as a rough partitioning of speech sounds into groups that are conveniently used by historical linguists when comparing languages BIBREF4. Table TABREF3 shows part of the International Phonetic Alphabet (IPA) (for more details about the IPA, please refer to BIBREF10). After obtaining the IPA transcriptions of the word pairs from the wordlist, the algorithm determines whether they are cognates by checking whether their first two consonant classes match. However, since the algorithm only compares the first two consonant classes, its accuracy is relatively low. I see two reasons for this: (a) in linguistics, the number of possible sounds in the world's human languages, excluding artificial languages, amounts to thousands BIBREF5; it is unrealistic to enroll all of these sounds in the system, because simulating the language change process would require a matrix of probabilities for every sound switching to every other sound, which is very time-consuming to estimate; (b) comparing only the first two consonant classes is not sufficient to determine whether the two words in a pair are derived from the same cognate word.
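To make the consonant-class matching idea concrete, the following sketch compares the first two consonant classes of two segmented words. The sound-class map is a small, hypothetical subset used only for illustration, not the full class system referred to above.

```python
# Minimal sketch of consonant-class matching (CCM).
# CLASS_MAP is a toy subset of a sound-class system, not the full inventory.
CLASS_MAP = {
    "p": "P", "b": "P", "f": "P",      # labial obstruents
    "t": "T", "d": "T",                # dental stops
    "s": "S",                          # sibilants
    "k": "K", "g": "K", "x": "K",      # velars
    "m": "M", "n": "M",                # nasals (merged here)
    "r": "R", "l": "R",                # liquids
}

def first_two_consonant_classes(segments):
    """Return the sound classes of the first two consonants in a word."""
    classes = [CLASS_MAP[seg] for seg in segments if seg in CLASS_MAP]
    return tuple(classes[:2])

def ccm_cognate(word_a, word_b):
    """Judge two words as cognate if their first two consonant classes match."""
    return first_two_consonant_classes(word_a) == first_two_consonant_classes(word_b)

# Two toy "words" given as lists of IPA-like segments:
print(ccm_cognate(["t", "u", "s"], ["t", "a", "n"]))   # False: (T, S) vs (T, M)
print(ccm_cognate(["b", "r", "o"], ["b", "r", "a"]))   # True:  (P, R) vs (P, R)
```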
Cognate Detection ::: Edit Distance
The Edit Distance approach takes the normalized Levenshtein distance BIBREF11, a concept from information theory. It measures the difference between two sequences by calculating the minimum number of character or string edits, such as insertions and deletions, which happen to be two basic types of phonetic change in language. The normalized distance can be used as an estimate of how likely one word is to change into the other.
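A minimal sketch of the normalized Levenshtein distance over phonetic segment sequences is given below; normalizing by the length of the longer sequence is one common convention and is assumed here.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance over two segment sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def normalized_levenshtein(a, b):
    """Edit distance scaled to [0, 1] by the length of the longer sequence."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

# Distance between two toy phonetic transcriptions:
print(normalized_levenshtein(list("hand"), list("hant")))  # 0.25
```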
Cognate Detection ::: Sound Class Algorithm (SCA)
This algorithm is designed for pairwise and multiple alignment analyses BIBREF2. It not only takes an expanded sound-class inventory into account, but also considers the prosodic aspect of each word. As a result, it is able to align within morpheme boundaries instead of over raw sound segments, assuming that the morpheme information is available as prior knowledge.
Cognate Detection ::: LexStat
The previous three methods use the same strategy to put the words from different languages into clusters, i.e., the UPGMA clustering algorithm, while LexStat uses language-specific scoring schemes derived from a Monte Carlo permutation of the data BIBREF5. The word pairs from different languages are first aligned and scored, and the Monte Carlo permutation shuffles the word pairs according to their scores. The scores can be calculated from the frequencies of daily use by native speakers. A distribution of sound-correspondence frequencies is thus generated; this distribution is then compared with an attested distribution and converted into a language-specific scoring scheme for all word pairs.
Following the algorithms above, and weighing their advantages and disadvantages, in this project I use a modified method: sound-class based skip-grams with bipartite networks (BipSkip). The procedure is quite straightforward and can be divided into three steps. In the first step, the words and their skip-grams form the two node sets of a bipartite network. The second step is optional and refines the bipartite network: before running the program, a threshold is supplied which determines whether the program should delete skip-gram nodes linked to fewer word nodes than the threshold. In my experiments, even when no threshold was given as a parameter, the algorithm still produced the same answer, only with a longer execution time. In the last step, the final bipartite graph is projected onto a monopartite graph and partitioned into cognate sets with the help of a graph partitioning algorithm; here I use the Infomap algorithm BIBREF12. For comparison with this method, I also use CCM and SCA as distance measures in this experiment; the UPGMA algorithm is used for clustering in those two cases.
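The following is a minimal sketch of the BipSkip construction: skip-grams are extracted from each word's sound-class string, a bipartite graph links words to their skip-grams, rarely shared skip-grams are pruned, and the word projection is partitioned. The skip-gram shape is a simplified assumption, and connected components stand in for the Infomap step (connected-component partitioning is also used later as an alternative).

```python
import itertools
import networkx as nx

def skipgrams(segments, n=2, gap=1):
    """Toy skip-grams: ordered n-grams whose positions span at most n-1+gap slots."""
    grams = set()
    for idx in itertools.combinations(range(len(segments)), n):
        if idx[-1] - idx[0] <= n - 1 + gap:
            grams.add(tuple(segments[i] for i in idx))
    return grams

def bipskip_clusters(words, threshold=2):
    """Build the word/skip-gram bipartite graph and return word clusters."""
    g = nx.Graph()
    for word_id, segs in words.items():
        for gram in skipgrams(segs):
            g.add_edge(("w", word_id), ("g", gram))
    # Prune skip-gram nodes linked to fewer word nodes than the threshold.
    weak = [node for node in g if node[0] == "g" and g.degree(node) < threshold]
    g.remove_nodes_from(weak)
    # Project to words via connected components (Infomap could be used instead).
    clusters = []
    for comp in nx.connected_components(g):
        cluster = {node[1] for node in comp if node[0] == "w"}
        if cluster:
            clusters.append(cluster)
    return clusters

words = {"EN_hand": ["H", "A", "N", "T"], "DE_hand": ["H", "A", "N", "T"],
         "EN_foot": ["F", "U", "T"], "DE_fuss": ["F", "U", "S"]}
# Groups EN_hand with DE_hand and EN_foot with DE_fuss.
print(bipskip_clusters(words))
```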
Bayesian Phylogenetic Inference
Methods for Bayesian phylogenetic inference in evolutionary biology and historical linguistics are all based on the following Bayes rule BIBREF4, BIBREF13: $f(\Lambda \mid X) = \frac{f(\Lambda )\, f(X \mid \Lambda )}{f(X)}$
or, writing out $\Lambda = (\tau , T, \theta )$ and expanding the evidence term, $f(\tau , T, \theta \mid X) = \frac{f(\tau , T, \theta )\, f(X \mid \tau , T, \theta )}{\int f(\Lambda ^{\prime })\, f(X \mid \Lambda ^{\prime })\, d\Lambda ^{\prime }}$
Here $f$ denotes a probability density function; $\Lambda $ consists of the tree topology $\tau $, the branch length vector of the tree $T$ and the substitution model parameter $\theta $; and $X$ is a data matrix of dimension $N*K$, with $N$ rows corresponding to the languages of a language family and $K$ columns corresponding to cognate clusters. Figure FIGREF8 shows an example of the matrix. As we can see, the data matrix is a binary matrix with elements $x_{ij}$: 1 means language $i$ is classified into cognate set $j$, while 0 means language $i$ does not belong to cluster $j$. Given the shape of the data matrix, to estimate the parameters ($\Lambda = (\tau , T, \theta )$) for tree generation, we need to sum over the whole matrix and consider all $\frac{(2N-3)!}{2^{N-2}(N-2)!}$ possible tree topologies.
This approach is practical only for small datasets, and the calculation quickly becomes difficult once more than about 5 languages are involved BIBREF13. In addition, although the posterior probability is easy and convenient to formulate from the prior probability $Pr(tree)$ and the likelihood $Pr(Data|Tree)$ via the Bayesian formula, computing this probability and obtaining the best estimate of the final tree requires the machine to evaluate and compare all possible trees and, for each tree, all combinations of branch lengths.
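To illustrate why exhaustive enumeration is hopeless, the following sketch evaluates the number of rooted binary tree topologies, $\frac{(2N-3)!}{2^{N-2}(N-2)!}$, for a few values of $N$.

```python
from math import factorial

def num_rooted_topologies(n):
    """Number of rooted, binary, labelled tree topologies for n >= 2 taxa."""
    return factorial(2 * n - 3) // (2 ** (n - 2) * factorial(n - 2))

for n in (3, 5, 10, 20):
    print(n, num_rooted_topologies(n))
# 3 -> 3, 5 -> 105, 10 -> 34,459,425, 20 -> roughly 8.2e21
```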
The Metropolis-Hastings (MH) algorithm BIBREF14, one of the Markov chain Monte Carlo techniques, is a good tool for overcoming this computational difficulty: it avoids summing over all of the topologies when evaluating the posterior probability $f(\Lambda |X)$. The likelihood needed when moving from one parameter setting to the next is calculated with the pruning algorithm. The pruning algorithm, also referred to here as the K81 model BIBREF15, is based on a Markov model of DNA evolution, which describes evolving DNA as a string over four discrete states, i.e., G, C, A and T. Fortunately, the language model is very similar to the DNA model in that both are discrete models; in the language case, the model is applied to a binary dataset.
Bayesian Phylogenetic Inference ::: Markov Chain Monte Carlo (MCMC)
The MCMC method is helpful here since it generates a probability distribution $\pi =\lbrace \pi _{i}\rbrace , i=0,1,\ldots $. We construct a Markov chain whose stationary distribution is the posterior probability distribution of the parameters. A new tree is proposed by stochastically perturbing the current tree, and the new tree is either accepted or rejected. If the new tree is accepted, the algorithm is repeated and the tree is subjected to further perturbation. If this Markov chain is constructed and configured properly, the proportion of time that any given tree is visited is a valid approximation of the posterior probability of that tree BIBREF16. Besides, $\pi _{i}$ is sometimes still not easy to calculate, but we can work with the ratio $\frac{\pi _{j}}{\pi _{i}}$ directly instead. The MH algorithm lets us do exactly this. Given sample states $i,j\in \Omega $, every state $i\in \Omega $ has probability $\pi _{i}$ under the distribution $\pi =\lbrace \pi _{i}\rbrace $. A transition from state $i$ to $j$ is proposed with probability $g_{ij}$, and the acceptance probability of that move from $i$ to $j$ is $\alpha _{ij}$. We can easily obtain the properties shown below:
Therefore, the chain $\textbf {P}=\lbrace p_{ij}\rbrace $ has $\pi $ as its stationary distribution, and it satisfies the detailed balance condition $\pi _{i}\,p_{ij} = \pi _{j}\,p_{ji}$.
A sufficient condition for this chain to have $\pi $ as its stationary distribution is that $\lbrace g_{ij}\rbrace $ be irreducible and aperiodic BIBREF13, BIBREF17. The MH algorithm aims to sample parameters from the posterior distribution, which allows us to generate the posterior distribution of phylogenetic trees, where each state of this Markov chain can be labelled with a cognate set assignment. We thus have the simple relationship $\pi _{i}=f(\tau =i|X)$. To calculate the ratio $\frac{\pi _{j}}{\pi _{i}}$, we have $\frac{\pi _{j}}{\pi _{i}} = \frac{f(\tau =j\mid X)}{f(\tau =i\mid X)} = \frac{f(X\mid \tau =j)\,f(\tau =j)}{f(X\mid \tau =i)\,f(\tau =i)}$, in which the intractable evidence term cancels.
We can then take $\alpha _{ij} = \min \left(1, \frac{\pi _{j}\, g_{ji}}{\pi _{i}\, g_{ij}}\right)$.
Plugging $\alpha _{ij}$ into the method gives Algorithm 1. With this method the Markov chain finally becomes workable and the running time is shortened greatly, but because of the large scale of the dataset a new problem appears: if the posterior probability density function has multiple peaks, the chain has a large probability of getting stuck in a local maximum.
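A minimal sketch of the Metropolis-Hastings accept/reject step is shown below; `log_posterior` and `propose` are placeholder functions standing in for the tree posterior and the tree-perturbation moves, not the actual implementation used in this project.

```python
import math
import random

def metropolis_hastings(init_state, log_posterior, propose, n_iter=10000):
    """Generic MH sampler; `propose` returns (new_state, log g_ij, log g_ji)."""
    state = init_state
    log_p = log_posterior(state)
    samples = []
    for _ in range(n_iter):
        cand, log_fwd, log_bwd = propose(state)
        cand_log_p = log_posterior(cand)
        # log acceptance ratio: log( pi_j * g_ji / (pi_i * g_ij) )
        log_alpha = (cand_log_p - log_p) + (log_bwd - log_fwd)
        if math.log(random.random()) < min(0.0, log_alpha):
            state, log_p = cand, cand_log_p
        samples.append(state)
    return samples
```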
To solve this problem, BIBREF18 proposed the Metropolis-coupled Markov chain Monte Carlo method (MC3), which is aimed at the situation where the Markov chain has to move across a distribution with multiple peaks; it requires $n$ chains to run simultaneously. Over the iterations we obtain $n$ different stationary distributions $\pi _{1},\ldots ,\pi _{n}$, but only the first one, $\pi _{1}$, is the target density; the rest serve to improve mixing. To `heat' some of these chains, the posterior probability of the corresponding chain is raised to the power of $\beta $. For instance, if the probability density function is $f(\Lambda |X)$, then the heated distribution is $f(\Lambda |X)^{\beta }$, where $\beta \in (0,1)$. Once we have a heated version of the Markov chain, new states are accepted more easily. For comparison, the heated version of the chain uses the acceptance ratio $\left(\frac{f(\Lambda ^{\prime }\mid X)}{f(\Lambda \mid X)}\right)^{\beta }$.
Raising the densities in the ratio to the power of $\beta < 1$ flattens the distribution, so even when $f(\Lambda |X) > f(\Lambda ^{\prime }|X)$ the proposed state has a better chance of being accepted. As a result, the heated chain is more likely to accept new states than the cold chains are. The MC3 algorithm requires very expensive computation, since multiple chains have to run on multiple CPU cores at the same time.
Based on the information above, to avoid both shortcomings at a cheaper computational cost, I use the MH algorithm with simulated annealing BIBREF19. This method is a modified version of the MH algorithm with a changed acceptance ratio, similar in spirit to MC3 but requiring only a single CPU. The acceptance ratio is controlled by a temperature $T_{i}$.
In this method there is an initial temperature $T_{0}$, set very high at the beginning, which corresponds to a strongly heated Markov chain; it is decreased over the following steps until $T_{i}\rightarrow 0$. To change the value of $T_{i}$, i.e., to cool down the Markov chain, a cool-down schedule is built into the method. Following BIBREF19, the initial value $T_{0}$ is extremely high and is decreased in every iteration as $T_{i} = \lambda T_{i-1}$, with $\lambda $ set in the range $[0.85,0.96]$ BIBREF20. In the experiment, I reduce the temperature to cool down the chain until $T_{i} = 10^{-5}$. The final state of the whole Markov chain is regarded as the maximum-a-posteriori (MAP) estimate of the inference. The Bayesian analysis in this project uses binary datasets with all states being 0 or 1. The Generalized Time Reversible model (GTR model) BIBREF21, BIBREF22 is employed to compute the transition probabilities between states. When building the phylogenetic trees, the branch lengths and the shape of the tree are continuous variables, so a move drawn from an exponential distribution is used as the potential transition function. When the program is launched, a uniform move randomly chooses two states in $\Omega $ and proposes a new frequency, keeping the summed frequency of the unchanged states fixed. I use Subtree Pruning and Regrafting moves and Nearest Neighbor Interchange BIBREF23 on the nodes and leaves to build the new trees.
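Below is a minimal sketch of the cool-down loop: the temperature starts high and is multiplied by $\lambda $ at every iteration until it falls below $10^{-5}$. The annealed acceptance shown here raises the posterior ratio to the power $1/T_{i}$, which is one common formulation and is assumed here rather than taken from BIBREF19.

```python
import math
import random

def annealed_mh(init_state, log_posterior, propose, t0=100.0, lam=0.9, t_min=1e-5):
    """MH with simulated annealing; the final state approximates the MAP tree."""
    state, log_p, temp = init_state, log_posterior(init_state), t0
    while temp > t_min:
        cand, log_fwd, log_bwd = propose(state)
        cand_log_p = log_posterior(cand)
        # Annealed acceptance: posterior ratio raised to the power 1/temp (assumed form).
        log_alpha = ((cand_log_p - log_p) + (log_bwd - log_fwd)) / temp
        if math.log(random.random()) < min(0.0, log_alpha):
            state, log_p = cand, cand_log_p
        temp *= lam  # cool-down: T_i = lam * T_{i-1}, lam in [0.85, 0.96]
    return state     # treated as the MAP estimate
```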
Experiments ::: Settings and Implementation
It is not easy to know in advance which kind of skip-gram and which sound-class system will generate satisfying results, so I design my experiment as follows. (1) For training, I use the datasets proposed by BIBREF7 (the training data are listed in Table TABREF14) and compute skip-grams of length 4. Although some languages appear in both the training and the testing datasets, such as Chinese in Sino-Tibetan and Austronesian, I manually remove the repeated data from both datasets, and the concepts in the two datasets do not overlap. (2) For the sound classes, I refer to the SCA sound classes BIBREF7. (3) The threshold in step 2 is set to 0.2 (20%). (4) The evaluation metric is B-Cubed BIBREF24. With this configuration, the F-score is 85.4% when connected-component partitioning is used, and 85.2% with the Infomap partitioning algorithm.
Experiments ::: Evaluation Methods
To evaluate cognate detection, I use the B-Cubed scores proposed by BIBREF25. The evaluation is entirely cluster-based, in terms of precision, recall and F1-score: precision and recall are calculated separately for each cluster, and the final precision and recall for the entire output are obtained by combining the scores of each cluster/entity in the output. This evaluation metric is implemented in LingPy.
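A minimal sketch of item-level B-Cubed precision, recall and F1 is given below, computed from a predicted clustering and a gold clustering over the same items; it follows the usual formulation and is not taken from the LingPy implementation.

```python
def bcubed(predicted, gold):
    """B-Cubed precision, recall and F1 for dicts mapping item -> cluster label."""
    items = list(predicted)
    def members(assign, label):
        return {i for i in items if assign[i] == label}
    precision = recall = 0.0
    for item in items:
        p_cluster = members(predicted, predicted[item])
        g_cluster = members(gold, gold[item])
        correct = len(p_cluster & g_cluster)
        precision += correct / len(p_cluster)
        recall += correct / len(g_cluster)
    precision /= len(items)
    recall /= len(items)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

pred = {"a": 1, "b": 1, "c": 2, "d": 2}
gold = {"a": 1, "b": 1, "c": 1, "d": 2}
print(bcubed(pred, gold))  # (0.75, 0.667, 0.706) approximately
```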
To test whether the phylogenetic tree generation module works well, I use the Generalized Quartet Distance (GQD) metric, which compares the tree produced by historical linguistics experts with the one generated by the machine. The quartet distance measures how similar two phylogenetic trees are; it is defined as the number of quartets that differ between the two trees. A quartet is a set of four leaves selected from the leaf set without replacement BIBREF26; i.e., for a tree with $n$ leaves, $\binom{n}{4}$ quartets have to be compared across the two trees. The four possible quartet topologies are shown in Figure FIGREF16.
Given a tree $\tau $ with $n$ leaves, I can divide the quartets of $\tau $ into a set of stars $S(\tau )$ and a set of butterflies $B(\tau )$. The similarity between the two trees, GQD, is then defined as follows, where $\tau _{g}$ is the ground-truth tree from historical linguists.
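The following is a brute-force sketch of quartet comparison between two trees over the same leaf set, using networkx: a quartet $ab|cd$ is a butterfly in a tree when the $a$–$b$ path and the $c$–$d$ path are node-disjoint, and a star when no such split exists. GQD additionally normalizes by the butterflies of the gold tree; that normalization is omitted here.

```python
import itertools
import networkx as nx

def quartet_topology(tree, a, b, c, d):
    """Return the resolved split of {a,b,c,d} in `tree`, or None for a star."""
    for (x, y), (u, v) in [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]:
        path1 = set(nx.shortest_path(tree, x, y))
        path2 = set(nx.shortest_path(tree, u, v))
        if not path1 & path2:
            return frozenset([frozenset([x, y]), frozenset([u, v])])
    return None  # unresolved quartet (star)

def differing_butterflies(tree_a, tree_b, leaves):
    """Count quartets that are resolved in both trees but split differently."""
    diff = 0
    for quartet in itertools.combinations(leaves, 4):
        qa = quartet_topology(tree_a, *quartet)
        qb = quartet_topology(tree_b, *quartet)
        if qa is not None and qb is not None and qa != qb:
            diff += 1
    return diff

# Two tiny unrooted trees with internal nodes x, y and leaves a, b, c, d:
t1 = nx.Graph([("a", "x"), ("b", "x"), ("x", "y"), ("c", "y"), ("d", "y")])
t2 = nx.Graph([("a", "x"), ("c", "x"), ("x", "y"), ("b", "y"), ("d", "y")])
print(differing_butterflies(t1, t2, ["a", "b", "c", "d"]))  # 1
```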
In practice, I use the gold-standard trees from Glottolog BIBREF28. Glottolog provides a detailed list of understudied languages, including their language families; the dataset covers 7,592 understudied spoken L1 languages. Linguists have begun to use this resource to build phylogenetic trees of higher accuracy, since its cognate sets are annotated manually. For example, BIBREF4 uses it as ground truth to compare against phylogenetic trees constructed from automatically generated cognate sets. Their results show that the trees are of higher quality when annotated cognate sets are used, which supports the motivation for automated cognate detection for unstudied languages: helping linguists study less-studied languages more efficiently.
Experiments ::: Results and Discussion
Cognate Detection Table TABREF19 shows the statistics of the test data developed by BIBREF2. The results of the BipSkip approach for cognate detection are shown in Table TABREF20, alongside the results of using SCA and CCM. As the tables show, the BipSkip approach is not the quickest method, although it is the most user-friendly. CCM is surprisingly the fastest method, with slightly higher precision than the BipSkip approach, especially on Austronesian and Sino-Tibetan. The results for Indo-European languages stay almost the same across methods; my guess is that the languages in the Indo-European family are phonetically more similar to each other than the languages within the other families, so the three methods perform almost identically. The SCA algorithm is not recommended here, as it takes the longest time and the most space, since the expanded sound classes and some morphological features have to be prepared.
Phylogenetic Inference The results of phylogenetic inference with the modified MH algorithm are shown in Table TABREF22. I designed a series of experiments, changing the settings, and obtained some good results. I set the initial temperature to $T_{0} = \lbrace 10, 20, 30, 40, 50, 60, 70, 80, 90, 100\rbrace $. During the project, I record the number of iterations and the time it takes to finish running the program. Table TABREF21 shows the ground-truth results, obtained by testing on the gold-standard cognates from Glottolog. It is hard to determine which of the three methods outperforms the other two. Overall, Indo-European requires the shortest time and the fewest iterations, since this language family always has the highest accuracy under all three methods. In addition, it is easier to build phylogenetic trees from the cognate sets produced by the automatic approaches than from the manually annotated standard cognate sets; the results show that the automatic methods clearly shorten the time of automatic phylogenetic inference.
Conclusion
Admittedly, the results are not surprisingly good. As the data show, the accuracy for some languages is only slightly over 50%. Among the five language families in the testing data, Indo-European achieves higher accuracy than the other four language families, due to its similar phonetic features. Also, for languages from places where native peoples lived and were later colonized by immigrants, for example the South Pacific islands, Hawaii, Indonesia, etc., the accuracy is noticeably high, and it is easy to cluster their words into cognate sets and find their relationships. In contrast, the accuracy for the Pama-Nyungan family, whose languages are spoken mainly on the Australian continent, is surprisingly lower than for any of the other South Pacific island languages.
From this experiment, we can see that this method can help historical linguists analyze language development and change, but the results are not very accurate. How do we solve this problem? The current datasets only include the main classes of the phonetic alphabet; I think it is necessary to incorporate background knowledge about phonetic change, such as the Great Vowel Shift, so that the machine can recognize the probability of a change from phoneme $A$ to phoneme $B$.
Appendix
In this appendix, I show part of the resulting tree generated from the Indo-European test dataset. The whole tree structure is very large, involving 34 trees in total; only the first four trees are shown. The number on each edge is the probability that the connected nodes are related. The labels at the leaves of the tree are language codes: for example, poli represents Polish, norw represents Norwegian, and lati represents Latin. The number after the language code is the index of the word in the wordlist. | Yes
052d19b456f1795acbb8463312251869cc5b38da | 052d19b456f1795acbb8463312251869cc5b38da_0 | Q: What metrics are used to evaluate results?
Text: (verbatim duplicate of the preceding paper text) | Unanswerable
7b89515d731d04dd5cbfe9c2ace2eb905c119cbc | 7b89515d731d04dd5cbfe9c2ace2eb905c119cbc_0 | Q: Which is the baseline model?
Text: Introduction
Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues.
Cues for language identification
There are more than 5000 languages in the world, and each language has distinct properties at different levels, from acoustic to semantics BIBREF0 , BIBREF1 , BIBREF2 . A number of studies have investigated how humans use these properties as cues to distinguish between languages BIBREF3 . For example, Muthusamy BIBREF4 found that familiarity with a language is an important factor affecting LID accuracy, and that longer speech samples are easier to identify. Moreover, people can easily tell what cues they use for identification, including phonemic inventory, word usage, and prosody. More thorough investigations were conducted by others by modifying speech samples to promote one or several factors. For example, Mori et al. BIBREF5 found that people are able to identify Japanese and English fairly reliably even when phone information is reduced. They argued that other non-linguistic cues such as intensity and pitch were used to decide the language. Navratil BIBREF6 evaluated the importance of various types of knowledge, including lexical, phonotactic and prosodic, by asking humans to identify five languages, Chinese, English, French, German and Japanese. Subjects were presented with unaltered speech samples, samples with randomly altered syllables, and samples with the vocal-tract information removed to leave only the F0 and amplitude. Navratil found that the speech samples with random syllables are more difficult to identify compared to the original samples (73.9% vs 96%), and removing vocal-tract information leads to significant performance reduction (73.9% vs 49.4%). This means that with this 5-language LID task, the lexical and phonotactic information is important for human decision making.
The LID experiments summarised above suggest that languages can be discriminated by multiple cues at different levels, and the cues used to differentiate different language pairs are different. In general, the cues can be categorized into three levels: feature level, token level and prosody level. At the feature level, different languages have their own implementation of phones, and the transitions between phones are also different. This acoustic speciality is a short-time property and can be identified by certain spectral analysis and feature extraction of our auditory system. At the token level, the distribution and transition patterns of linguistic tokens at various levels are significantly different. The tokens can be phones/phonemes, syllables, words or even syntactic or semantic tags. At the prosody level, the duration, pitch and stress patterns often differ between languages. For example, patterns of stress can provide an important cue for discriminating between two stressed languages, duration can also be potentially useful, and the tone patterns of syllables or words offer a clear cue to discriminate between tonal languages.
LID approaches
Based on the different types of cues, multiple LID approaches have been proposed. Early work generally focused on feature-level cues. Feature-based methods use strong statistical models built on raw acoustic features to make the LID decision. For instance, Cimarusti used LPC features BIBREF7 , and Foil et al. BIBREF8 investigated formant features. Dynamic features that involve temporal information were also demonstrated to be effective BIBREF9 . The statistical models used include Gaussian mixture models (GMMs) BIBREF10 , BIBREF11 , hidden Markov models (HMMs) BIBREF12 , BIBREF13 , neural networks (NNs) BIBREF14 , BIBREF15 , and support vector machines (SVMs) BIBREF16 . More recently, a low-rank GMM model known as the i-vector model was proposed and achieved significant success BIBREF17 , BIBREF18 . This model constrains the mean vectors of the GMM components in a low-dimensional space to improve the statistical strength for model training, and uses a task-oriented discriminative model (e.g., linear discriminative analysis, LDA) to improve the decision quality at run-time, leading to improved LID performance. Due to the short-time property of the features, most feature-based methods model the distributional characters rather than the temporal characters of speech signals.
The token-based approach is based on the characters of high-level tokens. Since the dynamic properties of adjacent tokens are more stable than adjacent raw features, temporal characters can be learned with the token-based approach, in additional to the distributional characters. A typical approach is to convert speech signals into phone sequences, and then build an n-gram language model (LM) for each target language to evaluate the confidence that the input speech matches that language. This is the famous phone recognition and language modelling (PRLM) approach. Multiple PRLM variants have been proposed, such as parallel phone recognition followed by LM (PPRLM) BIBREF19 , BIBREF20 , and phone recognition on a multilingual phone set BIBREF21 . Other tokens such as syllables BIBREF22 and words BIBREF23 , BIBREF24 have also been investigated.
The prosody-based approach utilizes patterns of duration, pitch, and stress to discriminate between languages. For example, Foil et al. BIBREF8 studied formant and prosodic features and found formant features to be more discriminative. Rouas et al. BIBREF25 modeled pure prosodic features by GMMs and found that their system worked well on read speech, but could not deal with the complexity of spontaneous speech prosody. Muthusamy BIBREF15 used pitch variation, duration and syllable rate. Duration and pitch patterns were also used by Hazen BIBREF21 . In most cases, the prosodic information is used as additional knowledge to improve feature or token-based LID.
Most of the above methods, no matter what information is used, heavily rely on probabilistic models to accumulate evidence from a long speech segment. For example, the PRLM method requires an n-gram probability of the phonetic sequence, and the GMM/i-vector method requires the distribution of the acoustic feature. Therefore, these approaches require long test utterances, leading to inevitable latency in the LID decision. This latency is a serious problem for many practical applications, e.g., code-switching ASR, where multiple languages may be contained within a single block of speech. For quick LID, frame-level decision is highly desirable, which therefore cannot rely on probabilistic models.
The recently emerging deep learning approach solves this problem by using various deep neural networks (DNNs) to produce frame-level LID decisions. An early successful deep neural model was developed by Lopez-Moreno et al. BIBREF26 , who proposed an approach based on a feed-forward deep neural network (FFDNN), which accepts raw acoustic features and produces frame-level LID decisions. The score for utterance-based decision is calculated by averaging the scores of the frame-level decisions. This was extended by others with the use of various neural model structures, e.g., CNN BIBREF27 , BIBREF28 and TDNN BIBREF29 , BIBREF30 . These DNN models are feature-based, but they consider a large context window, and can therefore learn the feature's temporal information, which is not possible with conventional feature-based models (such as the i-vector model), that only learn distributional information. The temporal information can be better learned by recurrent neural networks (RNNs), as proposed by Gonzalez-Dominguez et al. BIBREF31 . Using an RNN structure based on the long-short term memory unit (LSTM), the authors reported better performance with fewer parameters. This RNN approach was further developed by others, e.g., BIBREF32 , BIBREF33 .
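As a concrete illustration of frame-level neural LID, the following PyTorch sketch runs an LSTM over acoustic frames and emits a language posterior for every frame; the utterance-level score is the average of the frame scores. The feature dimension, hidden size and number of target languages are arbitrary placeholders, not the configurations used in the papers cited above.

```python
import torch
import torch.nn as nn

class LSTMLID(nn.Module):
    """Frame-level LID: acoustic frames in, per-frame language posteriors out."""
    def __init__(self, feat_dim=40, hidden=256, n_langs=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.output = nn.Linear(hidden, n_langs)

    def forward(self, frames):                  # frames: (batch, time, feat_dim)
        hidden_states, _ = self.lstm(frames)    # (batch, time, hidden)
        return self.output(hidden_states)       # (batch, time, n_langs) logits

model = LSTMLID()
utterance = torch.randn(1, 300, 40)             # 300 frames of 40-dim Fbank features
frame_logits = model(utterance)
utterance_score = frame_logits.softmax(dim=-1).mean(dim=1)  # average over frames
print(utterance_score.argmax(dim=-1))           # predicted language id
```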
It should be noted that DNNs have been used in other ways in LID. For example, Song et al. BIBREF34 used a DNN to extract phonetic feature for the i-vector system, and Ferrer et al. BIBREF35 proposed a DNN i-vector approach that uses posteriors produced by a phone-discriminative FFDNN to compute the Baum-Welch statistics. Tian et al. BIBREF36 extended this by using an RNN to produce the posteriors. These methods all use neural models as part of the system, but their basic framework is still probabilistic, so they share the same problem of decision latency. In this paper, we focus on the pure neural approach that uses neural models as the basic framework, so that short-time language information can be learned by frame-level discriminative training.
Motivation of the paper
All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach.
Table 1 summarizes different systems that use deep neural models in LID. The probabilistic approach uses DNNs as part of a probabilistic system, e.g., GMM or i-vector, while the neural approach uses various types of DNNs as the decision architecture. Both approaches may use either acoustic features or phonetic features. The proposed PTN approach is at the bottom-right of the table.
Paper organization
The remainder of the paper is organized as follows: the model structures of the PTN approach will be presented in Section "Phonetic neural modelling for LID" , which is followed by the implementation details in Section "Model structure" . The experiments and results are reported in Section "Experiments" , and some conclusions and future work will be presented in Section "Conclusions" .
Phonetic neural modelling for LID
In this section, we present the models that employ phonetic information for RNN LID. While the phonetically aware approach treats phonetic information as auxiliary knowledge, the PTN approach uses phonetic information as the only input into the RNN LID system. Both are depicted in Fig. 1 .
Phonetically aware acoustic neural model
The instinctive idea for utilizing phonetic information in the RNN LID system is to treat it as auxiliary knowledge, which we call a phonetically aware approach. Intuitively, this can be regarded as a knowledge-fusion method that uses both the phonetic and acoustic features to learn LID models. Fig. 1 (a) shows this model. A phonetic DNN model (this may be in any structure, such as FFDNN, RNN, TDNN) is used to produce frame-level phonetic features. These can be read from anywhere in the phonetic DNN, such as the output, or the last hidden layer, and then be propagated to the LID model, an LSTM-RNN in our study. This propagated phonetic information can be accepted by the LID model in different ways. For example, it can be part of the input, or as an additional term of the gate or non-linear activation functions.
Phonetic temporal neural model
The second model, which we call the PTN model, completely replaces the acoustic feature with the phonetic feature, and thus entirely relies on the properties of the phonetic representation. This learning is based on the RNN model, therefore the temporal patterns of the phonetic features can be learned. This PTN system is shown in Fig. 1 (b). Although the PTN model is a special, `aggressive' case of the phonetically aware approach, the success of this model offers a deeper insight into the LID task as it rediscovers the importance of the temporal properties of phonetic representations.
Understanding the PTN approach
The rationality of the PTN approach can be understood from two perspectives: the phonetic perspective, which relates to what information is important, and the transfer learning perspective, which relates to how this information is learned.
Phonetic perspective: The PTN approach adopts the long-standing hypothesis (as used by the PRLM model) that languages should be discriminated by phonetic rather than spectral properties. However this has been largely overlooked since the success of the i-vector approach, which achieved good performance using only raw acoustic features. However, Song et al. BIBREF34 recently rediscovered the value of phonetic features in the i-vector model. The PTN approach proposed here follows the same idea and rediscovers the value of phonetic features in the neural model. We argue that this value is more important for the neural model than for the probabilistic model (e.g., i-vector), as its decision is based on only a small number of frames, and thus requires that the feature involves more language-related information and less noise and uncertainties. The i-vector model, in contrast, can utilize more speech signals, hence can discover language-related information from the distributional patterns even with raw acoustic features.
Both the PTN approach and the historical token-based approach share the same idea of utilizing phonetic information and modelling the temporal patterns, but they are fundamentally different. Firstly, the phonetic information in the PTN approach is frame-level, while in conventional token-based methods this information is unit-level. Therefore, the PTN approach can represent phonetic properties at a higher temporal resolution. Secondly, conventional token-based methods represent phonetic information as sequences derived from phone recognition, while the PTN approach represents phonetic information as a feature vector that involves information contributed by all phones, and thus more detailed phonetic information is represented. Finally, the back-end model of the conventional token-based approach is an n-gram LM based on discrete tokens and trained with the maximum likelihood (ML) criterion, while the back-end model of the PTN approach is an RNN, which functions similarly to an RNN LM, but is based on continuous phonetic features, and trained with a task-oriented criterion that discriminates the target languages.
Transfer learning perspective: The second perspective to understand the PTN approach is from the transfer learning perspective BIBREF37 . It is well known that DNNs perform very well at learning task-oriented features from raw data. This is the hypothesis behind conventional acoustic RNN LID methods: if the neural model is successfully trained, it can learn any useful information from the raw acoustic features layer by layer, including the phonetic information. It therefore initially seems unnecessary to design our PTN phonetic feature learning and modelling architecture. However, we argue that using the language labels alone to learn LID-related information from raw acoustic features is highly ineffective, because these labels are too coarse to provide sufficient supervision. With the PTN model, feature extraction is trained on speech data labelled with phones or words which are highly informative and fine-grained (compared to language labels), leading to a strong DNN model for phonetic feature extraction. Importantly, phone discrimination and language identification are naturally correlated (from our phonetic perspective), which means that the phonetic features learned with the strong phone/word supervision involves rich information suitable for LID. This is an example of transfer learning, where a related task (i.e., phone discrimination) is used to learn features for another task (LID).
The PTN approach also involves another two transfer learning schemes: cross language and cross condition (databases). This means that the phonetic DNN can be learned with any speech data in any language. This property was identified in token-based LID BIBREF19 , however it is more important for the phonetic neural models, as training the phonetic DNN requires a large amount of speech data which is often not available for the target languages and the operating conditions under test. Moreover, it is also possible to train the phonetic DNN with multilingual, multi-conditional data BIBREF38 , resulting in robust and reliable phonetic feature extraction.
In summary, the PTN approach utilizes a detailed phonetic representation (DNN phonetic feature), and a powerful temporal model (LSTM-RNN) to capture the phonetic temporal properties of a language with a high temporal resolution. It also utilizes three types of transfer learning to ensure that the phonetic feature is representative and robust. Our PTN approach is therefore very powerful and flexible, and reconfirms the belief of many LID researchers that phonetic temporal information is highly valuable in language discrimination, not only for humans but also for machines.
Model structure
This section presents the details of the phonetic neural LID models, including both the phonetically aware model and the PTN model. The phonetic DNN can be implemented in various DNN structures, and here we choose the TDNN BIBREF39 which can learn long-term phonetic patterns and performed well in our experiments.
For the LID neural model, we choose the LSTM-RNN. One reason for this choice is that LSTM-RNN has been demonstrated to perform well in both the pure neural LID approach BIBREF31 and the neural-probabilistic hybrid LID approach BIBREF36 . Another reason is that the RNN model can learn the temporal properties of speech signals, which is in accordance with our motivation to model the phonetic dynamics, as in the conventional PRLM approach BIBREF20 . We first describe the LSTM-RNN structure used for LID, and then present the model structures of the phonetically aware acoustic RNN model and PTN model.
LSTM-RNN LID
The LSTM-RNN model used in this study is a one-layer RNN model, where the hidden units are LSTM. The structure proposed by Sak et al. BIBREF40 is used, as shown in Fig. 2 .
The associated computation is given as follows:
$$i_t &=& \sigma (W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i) \nonumber \\ f_t &=& \sigma (W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f) \nonumber \\ c_t &=& f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + b_c) \nonumber \\ o_t &=& \sigma (W_{ox}x_t + W_{or}r_{t-1} + W_{oc}c_t + b_o) \nonumber \\ m_t &=& o_t \odot h(c_t) \nonumber \\ r_t &=& W_{rm} m_t \nonumber \\ p_t &=& W_{pm} m_t \nonumber \\ y_t &=& W_{yr}r_t + W_{yp}p_t + b_y \nonumber $$ (Eq. 13)
In the above equations, the $W$ terms denote weight matrices, and those associated with the cells were constrained to be diagonal in our implementation. The $b$ terms denote bias vectors. $x_t$ and $y_t$ are the input and output symbols respectively; $i_t$ , $f_t$ , $o_t$ represent the input, forget and output gates, respectively; $c_t$ is the cell and $m_t$ is the cell output. $r_t$ and $p_t$ are two output components derived from $m_t$ , where $r_t$ is recurrent and fed to the next time step, while $p_t$ is not recurrent and contributes to the present output only. $\sigma $ is the logistic sigmoid function, and $g$ and $h$ are non-linear activation functions, chosen here to be the hyperbolic tangent. $\odot $ denotes element-wise multiplication.
In this study, the LSTM layer consists of $1,024$ cells, and the dimensionality of both the recurrent and non-recurrent projections is set to 256. The natural stochastic gradient descent (NSGD) algorithm BIBREF41 was employed to train the model. During training and decoding, the cells were reset every 20 frames to ensure that only short-time patterns are learned.
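To make the computation above concrete, the following NumPy sketch implements a single time step of this projected LSTM cell. The dictionary-based parameter layout and function names are our own illustrative choices (the experiments use Kaldi's implementation); the diagonal peephole weights are stored as vectors, matching the diagonal constraint described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmp_step(x_t, r_prev, c_prev, W, b):
    """One step of the projected LSTM used as the LID RNN (1,024 cells and
    256-dimensional recurrent/non-recurrent projections in this study)."""
    i_t = sigmoid(W['ix'] @ x_t + W['ir'] @ r_prev + W['ic'] * c_prev + b['i'])
    f_t = sigmoid(W['fx'] @ x_t + W['fr'] @ r_prev + W['fc'] * c_prev + b['f'])
    c_t = f_t * c_prev + i_t * np.tanh(W['cx'] @ x_t + W['cr'] @ r_prev + b['c'])
    o_t = sigmoid(W['ox'] @ x_t + W['or'] @ r_prev + W['oc'] * c_t + b['o'])
    m_t = o_t * np.tanh(c_t)                      # cell output
    r_t = W['rm'] @ m_t                           # recurrent projection, fed to the next step
    p_t = W['pm'] @ m_t                           # non-recurrent projection
    y_t = W['yr'] @ r_t + W['yp'] @ p_t + b['y']  # per-frame language scores
    return y_t, r_t, c_t
```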
Phonetically aware neural LID
In the phonetically aware model, the phonetic feature is read from the phonetic DNN and is propagated to the LID RNN as additional information to assist the acoustic neural LID. The phonetic feature can be read either from the output (phone posterior) or the last hidden layer (logits), and can be propagated to different components of the RNN LID model, e.g., the input/forget/output gates and/or the non-linear activation functions.
Fig. 3 (a) illustrates a simple configuration, where the phonetic DNN is a TDNN model, and the feature is read from the last hidden layer. The phonetic feature is propagated to the non-linear function $g(\cdot )$ . With this configuration, calculation of the LID RNN is similar, except that the cell value should be updated as follows: $ c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + \underline{W^{\prime }_{c\phi }\phi _{t}} + b_c) $
where $\phi _t$ is the phonetic feature obtained from the phonetic DNN.
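As a sketch of this configuration (reusing the illustrative parameter layout and `np` import from the projected LSTM step above, plus a hypothetical extra matrix `W['cphi']` for the phonetic term), only the cell update changes:

```python
def phonetically_aware_cell_update(x_t, phi_t, r_prev, c_prev, i_t, f_t, W, b):
    """Cell update of the phonetically aware LID RNN: the frame-level phonetic
    feature phi_t (read from the phonetic TDNN) enters the g() non-linearity
    through an additional projection; all gates are computed as in lstmp_step."""
    return f_t * c_prev + i_t * np.tanh(
        W['cx'] @ x_t + W['cr'] @ r_prev + W['cphi'] @ phi_t + b['c'])
```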
Phonetic temporal neural (PTN) LID
The phonetically aware acoustic RNN model is an acoustic-based approach, with the phonetic feature used as auxiliary information. In contrast, the PTN approach assumes that the phonetic temporal properties cover most of the information for language discrimination, so the acoustic feature is not important any more. Therefore, it removes all acoustic features and uses the phonetic features as the only input of the LID RNN, as shown in Fig. 3 (b).
It is interesting to compare the PTN approach with other LID approaches. Firstly, it can be regarded as a new version of the conventional PRLM approach, particularly the recent PRLM implementation using RNN as the LM BIBREF42 . The major difference is that the PTN approach uses frame-level phonetic features while the PRLM approach uses token-level phonetic sequences; in addition, the phonetic information in the PTN approach is much richer than for PRLM, as it is represented as a continuous phonetic vector rather than discrete phonetic symbols.
The PTN approach is also correlated to the neural-probabilistic hybrid approach, where the phonetic DNN is used to produce phonetic features, from which the GMM or i-vector model is constructed. The PTN approach uses the same phonetic features, but employs an RNN model to describe the dynamic property of the feature, instead of modelling the distributional property using GMM or i-vector models. As will be discussed in the next section, temporal modelling is very important for phonetic neural models.
Finally, compared to the conventional acoustic RNN LID model, the PTN model uses phonetic features rather than acoustic features. Since the phonetic features can be learned with a very large speech database, they are much more robust against noise and uncertainties (e.g., speaker traits and channel distortions) than the raw acoustic features. This suggests that the PTN approach is more robust against noise than the conventional acoustic RNN approach.
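A minimal sketch of the PTN forward pass is given below, reusing `lstmp_step` from the earlier sketch; `phonetic_dnn` stands for any frame-level feature extractor (the TDNN in our experiments) and is a placeholder here rather than a real API.

```python
def ptn_forward(acoustic_frames, phonetic_dnn, W, b, proj_dim=256, reset_every=20):
    """PTN forward pass: each acoustic frame is first mapped to a phonetic
    feature, and that feature (not the raw Fbank frame) is the only input to
    the LID LSTM-RNN. The state is reset every 20 frames, as in training."""
    n_cells = W['rm'].shape[1]
    r, c = np.zeros(proj_dim), np.zeros(n_cells)
    frame_scores = []
    for t, x_t in enumerate(acoustic_frames):
        if t % reset_every == 0:                  # keep only short-time patterns
            r, c = np.zeros(proj_dim), np.zeros(n_cells)
        phi_t = phonetic_dnn(x_t)                 # frame-level phonetic feature
        y_t, r, c = lstmp_step(phi_t, r, c, W, b)
        frame_scores.append(y_t)
    return np.stack(frame_scores)                 # frame-level language scores
```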
Databases and configurations
The experiments were conducted on two databases: the Babel database and the AP16-OLR database. The Babel database was collected as part of the IARPA (Intelligence Advanced Research Projects Activity) Babel program, which aimed to develop speech technologies for low-resource languages. The sampling rate is 8 kHz and the sample size is 16 bits. In this paper, we chose speech data from seven languages in the Babel database: Assamese, Bengali, Cantonese, Georgian, Pashto, Tagalog and Turkish. For each language, official training and development datasets were provided. The training datasets contain both conversational and scripted speech, and the development datasets only contain conversational speech. We used the entire training set of each language for model training, but randomly selected $2,000$ utterances from the development set of each language to perform testing.
The training data sets from the seven languages are as follows: Assamese 75 hours, Bengali 87 hours, Cantonese 175 hours, Georgian 64 hours, Pashto 111 hours, Tagalog 116 hours and Turkish 107 hours. The average duration of the test utterances is $4.15$ seconds, ranging from $0.19$ seconds to $30.85$ seconds.
The AP16-OL7 database was originally created by Speechocean Inc., targeted towards various speech processing tasks (mainly speech recognition), and was used as the official data for the AP16-OLR LID challenge. The database contains seven datasets, each in a particular language. These are: Mandarin, Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. The data volume for each language is approximately 10 hours of speech signals recorded by 24 speakers (12 males and 12 females), with each speaker recording approximately 300 utterances in reading style by mobile phones, with a sampling rate of 16kHz and a sample size of 16 bits. Each dataset was split into a training set consisting of 18 speakers, and a test set consisting of 6 speakers. For Mandarin, Cantonese, Vietnamese and Indonesian, the recording was conducted in a quiet environment. For Russian, Korean and Japanese, there are 2 recording conditions for each speaker, quiet and noisy. The average duration (including silence) of all the $12,939$ test utterances of the seven languages is $4.74$ seconds, ranging from $1.08$ seconds to $18.06$ seconds.
The phonetic DNN is a TDNN structure, and the LID model is based on the LSTM-RNN. The raw feature used for those models consists of 23-dimensional Fbanks, with a symmetric 2-frame window for RNN and a symmetric 4-frame window for TDNN to splice neighboring frames. All the experiments were conducted with Kaldi BIBREF43 . The default configurations of the Kaldi WSJ s5 nnet3 recipe were used to train the phonetic DNN and the LID RNN. We first report experiments based on the Babel database, and then experiments with the AP16-OLR database.
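The splicing of neighbouring frames can be sketched as follows; padding the utterance edges by repeating the first/last frame is our own assumption rather than the exact Kaldi behaviour.

```python
import numpy as np

def splice_frames(fbanks, context):
    """Concatenate each 23-dimensional Fbank frame with a symmetric context
    window: context=2 for the RNN input and context=4 for the TDNN input."""
    T = fbanks.shape[0]
    padded = np.concatenate([np.repeat(fbanks[:1], context, axis=0),
                             fbanks,
                             np.repeat(fbanks[-1:], context, axis=0)], axis=0)
    return np.stack([padded[t:t + 2 * context + 1].reshape(-1) for t in range(T)])
```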
Babel: baseline of bilingual LID
As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).
For the i-vector baseline, the UBM involves $2,048$ Gaussian components and the dimensionality of the i-vectors is 400. The static acoustic features consist of 12-dimensional MFCCs and the log energy. These static features are augmented by their first and second order derivatives, resulting in 39-dimensional feature vectors. In our experiment, we train an SVM for each language to determine the score of a test i-vector belonging to that language. The SVMs are trained on the i-vectors of all training segments, following the one-versus-rest strategy.
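The language scoring on top of the i-vectors can be sketched with scikit-learn as below; the i-vector extraction itself (the $2,048$-component UBM and the 400-dimensional total variability space) is assumed to have been done elsewhere, e.g., with Kaldi, and the linear SVM is an illustrative choice.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_rest_svms(train_ivectors, train_langs, languages):
    """One SVM per language, trained on the i-vectors of all training segments
    with that language as the positive class (one-versus-rest)."""
    return {lang: LinearSVC(C=1.0).fit(train_ivectors,
                                       (np.asarray(train_langs) == lang).astype(int))
            for lang in languages}

def score_ivector(svms, ivector):
    """Per-language scores of a test i-vector; the highest score wins."""
    scores = {lang: svm.decision_function(ivector[None, :])[0]
              for lang, svm in svms.items()}
    return max(scores, key=scores.get), scores
```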
The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems.
The LID task can be conducted by either AG-RNN-LID or AG-RNN-MLT (using the LID output group) at the frame-level (denoted as `Fr.'), using the frame-level language posteriors they produce. To evaluate the utterance-level (denoted as `Utt.') performance, the frame-level posteriors are averaged to form the utterance-level posterior, by which the language decision can be made.
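A sketch of the utterance-level decision rule, assuming the frame-level posteriors are already available as a (frames x languages) array:

```python
import numpy as np

def utterance_decision(frame_posteriors, languages):
    """Average the frame-level language posteriors over the utterance and pick
    the language with the highest averaged posterior."""
    utt_posterior = np.mean(frame_posteriors, axis=0)
    return languages[int(np.argmax(utt_posterior))], utt_posterior
```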
The performance results with the three baseline systems, in terms of $C_{avg}$ and equal error rate (EER), are shown in Table 2 . The results indicate that both the LID RNN and the multi-task LID RNN are capable of language discrimination, and the multi-task RNN significantly outperforms both the LID RNN and the i-vector baseline. This indicates that the phone information is very useful for neural LID, even if simply used as an auxiliary objective in the model training, hence supporting our transfer learning perspective, as described in Section "Phonetic neural modelling for LID" .
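For reference, the EER reported throughout can be computed from target/non-target trial scores as sketched below; this is a simple threshold sweep over the observed scores, not the exact NIST scoring tool used for $C_{avg}$.

```python
import numpy as np

def equal_error_rate(scores, is_target):
    """Approximate EER: sweep thresholds over the observed scores and return
    the operating point where false acceptance and false rejection are closest."""
    scores = np.asarray(scores, dtype=float)
    is_target = np.asarray(is_target, dtype=bool)
    best_gap, eer = np.inf, 1.0
    for thr in np.unique(scores):
        far = np.mean(scores[~is_target] >= thr)   # false acceptance rate
        frr = np.mean(scores[is_target] < thr)     # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```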
The multi-task learning approach is an interesting way to involve phonetic information in LID. However, it has the limitation of requiring the training data to be labelled in both languages and words/phones. This is very costly and not feasible in most scenarios. The phonetic neural models (the phonetically aware model and the PTN model) do not suffer from this problem.
Babel: phonetically aware bilingual LID
The phonetically aware architecture uses phonetic features as auxiliary information to improve the RNN LID. We experimented with various architectures for the phonetic DNN, and found that the TDNN structure is a good choice. In this experiment, the TDNN structure is composed of 6 time-delay layers, with each followed by a p-norm layer that reduces the dimensionality of the activation from $2,048$ to 256, the same dimension as the recurrent layer of the LID LSTM-RNN. The activations of the last hidden layer in the TDNN are read out as the phonetic feature.
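The p-norm layer can be sketched as a group-wise norm; p=2 is the usual Kaldi setting and is an assumption here rather than something stated above.

```python
import numpy as np

def pnorm_reduce(activation, out_dim=256, p=2.0):
    """Group p-norm non-linearity: the 2,048-dimensional affine output is split
    into 256 groups of 8, and each group is reduced to its p-norm, giving the
    256-dimensional activation that is read out as the phonetic feature."""
    groups = activation.reshape(out_dim, -1)        # (256, 8) for a 2,048-dim input
    return np.power(np.sum(np.abs(groups) ** p, axis=1), 1.0 / p)
```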
Two TDNN models are trained. The AG-TDNN-MLT model is a multi-task model trained with the Assamese and Georgian data, and there are two groups of output targets, phone labels and language labels. The ASR performance (WER) of the AG-TDNN-MLT model is $66.4\%$ and $64.2\%$ for Assamese and Georgian respectively. The SWB-TDNN-ASR model is an ASR model trained with the Switchboard database. This database involves 317 hours of telephone speech signals in English, recorded from $4,870$ speakers. The ASR performance (WER) of SWB-TDNN-ASR is $20.8\%$ on the Eval2000 dataset.
Another design decision that had to be made was to choose which component in the LID RNN will receive the phonetic information. After a series of preliminary experiments, it was found that the $g$ function is the best receiver. With this choice and the two TDNN phonetic DNNs, we therefore build the phonetically aware LID system. The results are shown in Table 3 . Several conclusions can be obtained from the results.
The phonetically aware system significantly outperforms the baseline RNN LID system (second row of the results in Table 2 ). This suggests that involving phonetic information with RNN LID has clear benefits.
The phonetically aware system significantly outperforms the multi-task RNN LID (third row of the results in Table 2 ). Note that in the multi-task RNN LID, the phonetic knowledge is used as an auxiliary task to assist the LID RNN training and has shown great benefits. The advantages of the phonetically aware system demonstrated that using the phonetic knowledge to produce phonetic features seems to be a better method than using the knowledge to directly assist model training.
The phonetic DNN trained with Assamese and Georgian data (AG-TDNN-MLT) shows better performance than the one trained with the Switchboard dataset (SWB-TDNN-ASR). This is not surprising as Assamese and Georgian are the two languages chosen to discriminate between in the experiments presented in this section, so AG-TDNN-MLT is more consistent with this LID task. Nevertheless, it is still highly interesting to observe that clear benefits can be obtained by using phonetic features produced by SWB-TDNN-ASR, which is trained with a completely irrelevant dataset, in terms of both languages and environmental conditions. This confirmed our transfer learning perspective theory (as discussed previously), and demonstrated that phonetic features are largely portable and the phonetic DNN can be trained with any data in any languages. This observation is particularly interesting for LID tasks on low-resource languages, as the phonetic DNN can be trained with data from any rich-resource languages.
Babel: PTN for bilingual LID
In the above experiments, the phonetic feature is used as auxiliary information. Here, we evaluate the PTN architecture where the phonetic feature entirely replaces the acoustic features (Fbanks). The experiment is conducted with two phonetic DNN models: AG-TDNN-MLT and SWB-TDNN-ASR.
The results are presented in Table 4 . We first observe that the PTN systems perform as well as the best phonetically aware system in Table 3 , and even better in terms of the utterance-level EER. For better comparison, we also test the special case of the phonetically aware RNN LID (Ph. Aware), where both the phonetic and acoustic features are used as the LID RNN input (Ph+Fb). This is the same as the PTN model, but involves additional acoustic features. The results are shown in the second group of Table 4 . It can be seen that this feature combination does not provide any notable improvement to the results. This means that the phonetic feature is sufficient to represent the distinctiveness of each language, in accordance with our argument that language characters are mostly phonetic.
We also attempted to use the TDNN as the LID model (replacing the RNN) to learn static (rather than temporal) patterns of the phonetic features. We found that this model failed to converge. The same phenomenon was also observed in the AP16-OLR experiment (which will be discussed later in the paper). This is an important observation and it suggests that, with the phonetic feature, only the temporal properties are informative for language discrimination.
Babel: Phonetic knowledge or deep structure?
The good performance using only the phonetic features (i.e. the PTN approach) leads to the question of how this performance advantage in comparison to the RNN LID baseline is obtained. This paper has discussed the phonetic and transfer learning perspectives, which jointly state that the main advantage of PTN is the phonetic knowledge learned through transfer learning. However, another possible reason is that the deeper architecture consisting of both the phonetic DNN and the LID RNN may help to learn more abstract features. If the latter reason is more important, then a similar deep structure trained with only the LID labels should work similarly well. To answer this question, we design the following three experiments to test the contributions to the results from phonetic information (transfer learning) and deep architecture (deep learning):
TDNN-LSTM. The phonetic DNN, TDNN in the experiment, is initialized randomly and trained together with the LID RNN. This means that the TDNN is not trained with ASR labels, but as part of the LID neural model, and is trained end-to-end.
Pre-trained TDNN-LSTM. The same as TDNN-LSTM, except that the TDNN is initialized by AG-TDNN-MLT.
3-layer LSTM-RNN. The 1-layer LSTM-RNN LID model may be not strong enough to learn useful information from acoustic features, hence leading to the suboptimal performance in Table 2 . We experiment with a 3-layer LSTM-RNN LID system to test if a simple deeper network can obtain the same performance as with the phonetic feature.
The results of these three deep models are shown in Table 5 . The TDNN-LSTM model completely fails. Using the phonetic TDNN as the initialization helps the training, but the results are worse than directly using the phonetic model. This means that the phonetic feature is almost optimal, and does not require any further LID-oriented end-to-end training. Finally, involving more LSTM layers (3-layer LSTM-RNN) does improve the performance a little when compared to the one-layer LSTM baseline ( $7.70$ vs $9.20$ , ref. to Table 2 ). These results indicate that the improvement with the PTN architecture is mainly due to the phonetic information it has learned from the ASR-oriented training (sometimes by multi-task learning), rather than the deep network structure. In other words, it is the transfer learning instead of deep learning that improves LID performance with the PTN architecture.
Babel: PTN on seven languages
We evaluate various LID models on the seven languages of the Babel database. First, the i-vector and LSTM-RNN LID baselines are presented. For the i-vector system, linear discriminative analysis (LDA) is employed to promote language-related information before training SVMs. The dimensionality of the LDA projection space is set to 6. For the phonetically aware RNN and the PTN systems, two phonetic DNNs are evaluated, AG-TDNN-MLT and SWB-TDNN-ASR. For the phonetically aware system, the $g$ function of the LSTM-RNN LID model is chosen as the receiver. The results are shown in Table 6 . It can be seen that both the phonetically aware and the PTN systems outperform the i-vector baseline and the acoustic RNN LID baseline, and that the PTN system with the AG-TDNN-MLT phonetic DNN performs the best. The SWB-TDNN-ASR performs slightly worse than AG-TDNN-MLT, indicating that familiarity with the language and the environment is beneficial when discriminating between languages. However, phonetic DNNs trained with data in foreign languages and in mismatched environment conditions (e.g., SWB-TDNN-ASR) still work well.
AP16-OLR: PTN on seven languages
In this section, we test the phonetic RNN LID approach on the AP16-OLR database. Compared to the Babel database, the speech signals in AP16-OLR are broadband (sampling rate of 16k Hz), and the acoustic environment is less noisy. Additionally, the speech data of each language is much more limited (10 hours per language), so we assume that training a phonetic DNN model is not feasible with the data of the target languages. We therefore utilize transfer learning, i.e., using phonetic DNNs trained on data in other languages.
All the test conditions are the same as in the seven-language Babel experiment. We trained two phonetic DNNs: one is a TDNN model of the same size as the AG-TDNN-MLT model in Section "Babel: phonetically aware bilingual LID" , but trained on the WSJ database, denoted by `WSJ-TDNN-ASR'. The other is also a TDNN, but is taken from an industry project, trained on a speech database involving $10,000$ hours of Chinese speech signals with 40-dimensional Fbanks. The network contains 7 rectifier TDNN layers, each containing $1,200$ hidden units. This model is denoted by `CH-TDNN-ASR'. The weight matrix of the last hidden layer in CH-TDNN-ASR is decomposed by SVD, where the low rank is set to 400. The 400-dimensional activations are read from the low-rank layer and are used as the phonetic feature.
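The low-rank decomposition can be sketched as a truncated SVD of the last hidden layer's weight matrix; the factor names below are illustrative.

```python
import numpy as np

def low_rank_decompose(W_last, rank=400):
    """Split a weight matrix into two low-rank factors, W_last ≈ B @ A.
    The 400-dimensional activation A @ x of the inserted low-rank layer is
    read out as the phonetic feature."""
    U, s, Vt = np.linalg.svd(W_last, full_matrices=False)
    A = Vt[:rank]                    # (400, input_dim): projection into the low-rank layer
    B = U[:, :rank] * s[:rank]       # (output_dim, 400): maps back to the layer output
    return A, B
```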
The test results on the seven languages in the database are shown in Table 7 . It can be seen that the phonetic RNN LID models, either the phonetically aware RNN or the PTN approach, significantly outperform the acoustic RNN baseline system. The PTN system seems much more effective, which differs from the Babel database results. This may be attributed to the limited training data, so the simpler PTN architecture is preferred. Comparing the WSJ-based phonetic DNN and the Chinese phonetic DNN, the Chinese model is better. This may be attributed to several reasons: (1) the Chinese database contains a larger volume of training data; (2) Chinese is one of the seven languages in AP16-OLR; (3) Chinese is more similar to the remaining 6 target languages in comparison to English, as most of the languages in AP16-OLR are oriental languages.
Another observation is that the i-vector system outperforms the phonetic RNN systems in the AP16-OLR experiment, which is inconsistent with the observations in the Babel experiment, where both phonetic systems significantly outperform the i-vector system. This discrepancy can be attributed to the different data profiles of the two databases, with two possible key factors: (1) the utterances of AP16-OLR are longer than those of Babel, making the i-vector system more effective; (2) the speech signals of AP16-OLR are cleaner than those of Babel. The RNN system is more robust against noise, and this advantage is less prominent with clean data. We will examine these two conjectures in the following experiments.
AP16-OLR: utterance duration effect
To show the relative advantage of the RNN and the i-vector systems on utterances of different length, we select the utterances of at least 5 seconds from the AP16-OLR test set, and create 10 test sets by dividing them into small utterances of different durations, from $0.5$ seconds to 5 seconds, in steps of $0.5$ seconds. Each group contains $5,907$ utterances, and each utterance in a group is a random segment excerpted from the original utterance.
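The construction of the duration-controlled test sets can be sketched as follows for a single utterance of at least 5 seconds (16 kHz sampling assumed, as in AP16-OLR):

```python
import numpy as np

def duration_segments(waveform, sample_rate=16000):
    """One random fixed-duration excerpt per target duration, from 0.5 s to
    5 s in 0.5 s steps."""
    segments = {}
    for dur in np.arange(0.5, 5.01, 0.5):
        n = int(dur * sample_rate)
        start = np.random.randint(0, len(waveform) - n + 1)
        segments[round(float(dur), 1)] = waveform[start:start + n]
    return segments
```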
The performance of the i-vector and PTN systems on the 10 test sets are shown in Fig. 4 , in terms of $C_{avg}$ and EER respectively. It is clear that the PTN system is more effective on short utterances, and if the utterance duration is more than 3 seconds, the i-vector system is the best performer, especially in terms of EER.
The duration distribution of the test utterances of the Babel database and the AP16-OLR database are shown in Fig. 5 . It is clear that the test utterances are generally longer in AP16-OLR than in Babel. This explains why the relative performance of the i-vector system and the RNN system is inconsistent between the two databases.
AP16-OLR: noise robustness
Finally, we test the hypothesis that the RNN system is more robust against noise. Firstly white noise is added to the AP16-OLR test set at different SNR levels, and the noise-augmented data are tested on two systems: the i-vector baseline and the best performing PTN system from Table 7 , i.e. with CH-TDNN-ASR as the phonetic DNN. The results of these two systems with different levels of white noise are shown in Table 8 . It can be seen that the PTN system is more noise-robust: with more noise corruption, the gap between the i-vector system and the PTN system becomes less significant, and the PTN system is better than the i-vector system in terms of $C_{avg}$ when the noise level is high (SNR=10). This can be observed more clearly in Fig. 6 , where the performance degradation rates compared to the noise-free condition are shown. The figure shows that when the noise increases, the performance degradation with the PTN system is less significant compared to the degradation with the i-vector system. As the Babel speech data is much more noisy than the AP16-OLR speech, this noise robustness with the PTN approach partly explains why the relative performance is inconsistent between the two databases.
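The noise corruption can be sketched as adding white Gaussian noise scaled to a target SNR; the exact noise-mixing tool used in the experiments is not specified above, so this is only an illustrative implementation.

```python
import numpy as np

def add_white_noise(waveform, snr_db):
    """Corrupt a test waveform with white Gaussian noise at the given SNR (dB)."""
    signal_power = np.mean(np.asarray(waveform, dtype=float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=np.shape(waveform))
    return waveform + noise
```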
Conclusions
This paper proposed a phonetic temporal neural (PTN) approach for language identification. In this approach, acoustic features are replaced by phonetic features to build an RNN LID model. Our experiments conducted on the Babel and AP16-OLR databases demonstrated that the PTN approach can provide dramatic performance improvement over the baseline RNN LID system, with even better results than a phonetically aware approach that treats the phonetic feature as additional auxiliary information. This demonstrated that phonetic temporal information is much more informative than raw acoustic information for discriminating between languages. This was a long-standing belief of LID researchers in the PRLM era, but has been doubted since the increased popularity and utilization of the i-vector approach in recent years. Future work will improve the performance of the neural LID approach on long sentences, by enabling the LSTM-RNN to learn long-time patterns, e.g., by multi-scale RNNs BIBREF44 . | The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system.
1db37e98768f09633dfbc78616992c9575f6dba4 | 1db37e98768f09633dfbc78616992c9575f6dba4_0 | Q: How big is the Babel database?
Text: Introduction
Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues.
Cues for language identification
There are more than 5000 languages in the world, and each language has distinct properties at different levels, from acoustic to semantics BIBREF0 , BIBREF1 , BIBREF2 . A number of studies have investigated how humans use these properties as cues to distinguish between languages BIBREF3 . For example, Muthusamy BIBREF4 found that familiarity with a language is an important factor affecting LID accuracy, and that longer speech samples are easier to identify. Moreover, people can easily tell what cues they use for identification, including phonemic inventory, word usage, and prosody. More thorough investigations were conducted by others by modifying speech samples to promote one or several factors. For example, Mori et al. BIBREF5 found that people are able to identify Japanese and English fairly reliably even when phone information is reduced. They argued that other non-linguistic cues such as intensity and pitch were used to decide the language. Navratil BIBREF6 evaluated the importance of various types of knowledge, including lexical, phonotactic and prosodic, by asking humans to identify five languages, Chinese, English, French, German and Japanese. Subjects were presented with unaltered speech samples, samples with randomly altered syllables, and samples with the vocal-tract information removed to leave only the F0 and amplitude. Navratil found that the speech samples with random syllables are more difficult to identify compared to the original samples (73.9% vs 96%), and removing vocal-tract information leads to significant performance reduction (73.9% vs 49.4%). This means that with this 5-language LID task, the lexical and phonotactic information is important for human decision making.
The LID experiments summarised above suggest that languages can be discriminated by multiple cues at different levels, and the cues used to differentiate different language pairs are different. In general, the cues can be categorized into three levels: feature level, token level and prosody level. At the feature level, different languages have their own implementation of phones, and the transitions between phones are also different. This acoustic speciality is a short-time property and can be identified by certain spectral analysis and feature extraction of our auditory system. At the token level, the distribution and transition patterns of linguistic tokens at various levels are significantly different. The tokens can be phones/phonemes, syllables, words or even syntactic or semantic tags. At the prosody level, the duration, pitch and stress patterns often differ between languages. For example, patterns of stress can provide an important cue for discriminating between two stressed languages, duration can also be potentially useful, and the tone patterns of syllables or words offer a clear cue to discriminate between tonal languages.
LID approaches
Based on the different types of cues, multiple LID approaches have been proposed. Early work generally focused on feature-level cues. Feature-based methods use strong statistical models built on raw acoustic features to make the LID decision. For instance, Cimarusti used LPC features BIBREF7 , and Foil et al. BIBREF8 investigated formant features. Dynamic features that involve temporal information were also demonstrated to be effective BIBREF9 . The statistical models used include Gaussian mixture models (GMMs) BIBREF10 , BIBREF11 , hidden Markov models (HMMs) BIBREF12 , BIBREF13 , neural networks (NNs) BIBREF14 , BIBREF15 , and support vector machines (SVMs) BIBREF16 . More recently, a low-rank GMM model known as the i-vector model was proposed and achieved significant success BIBREF17 , BIBREF18 . This model constrains the mean vectors of the GMM components in a low-dimensional space to improve the statistical strength for model training, and uses a task-oriented discriminative model (e.g., linear discriminative analysis, LDA) to improve the decision quality at run-time, leading to improved LID performance. Due to the short-time property of the features, most feature-based methods model the distributional characters rather than the temporal characters of speech signals.
The token-based approach is based on the characters of high-level tokens. Since the dynamic properties of adjacent tokens are more stable than adjacent raw features, temporal characters can be learned with the token-based approach, in additional to the distributional characters. A typical approach is to convert speech signals into phone sequences, and then build an n-gram language model (LM) for each target language to evaluate the confidence that the input speech matches that language. This is the famous phone recognition and language modelling (PRLM) approach. Multiple PRLM variants have been proposed, such as parallel phone recognition followed by LM (PPRLM) BIBREF19 , BIBREF20 , and phone recognition on a multilingual phone set BIBREF21 . Other tokens such as syllables BIBREF22 and words BIBREF23 , BIBREF24 have also been investigated.
The prosody-based approach utilizes patterns of duration, pitch, and stress to discriminate between languages. For example, Foil et al. BIBREF8 studied formant and prosodic features and found formant features to be more discriminative. Rouas et al. BIBREF25 modeled pure prosodic features by GMMs and found that their system worked well on read speech, but could not deal with the complexity of spontaneous speech prosody. Muthusamy BIBREF15 used pitch variation, duration and syllable rate. Duration and pitch patterns were also used by Hazen BIBREF21 . In most cases, the prosodic information is used as additional knowledge to improve feature or token-based LID.
Most of the above methods, no matter what information is used, heavily rely on probabilistic models to accumulate evidence from a long speech segment. For example, the PRLM method requires an n-gram probability of the phonetic sequence, and the GMM/i-vector method requires the distribution of the acoustic feature. Therefore, these approaches require long test utterances, leading to inevitable latency in the LID decision. This latency is a serious problem for many practical applications, e.g., code-switching ASR, where multiple languages may be contained within a single block of speech. For quick LID, frame-level decision is highly desirable, which therefore cannot rely on probabilistic models.
The recently emerging deep learning approach solves this problem by using various deep neural networks (DNNs) to produce frame-level LID decisions. An early successful deep neural model was developed by Lopez-Moreno et al. BIBREF26 , who proposed an approach based on a feed-forward deep neural network (FFDNN), which accepts raw acoustic features and produces frame-level LID decisions. The score for utterance-based decision is calculated by averaging the scores of the frame-level decisions. This was extended by others with the use of various neural model structures, e.g., CNN BIBREF27 , BIBREF28 and TDNN BIBREF29 , BIBREF30 . These DNN models are feature-based, but they consider a large context window, and can therefore learn the feature's temporal information, which is not possible with conventional feature-based models (such as the i-vector model), that only learn distributional information. The temporal information can be better learned by recurrent neural networks (RNNs), as proposed by Gonzalez-Dominguez et al. BIBREF31 . Using an RNN structure based on the long-short term memory unit (LSTM), the authors reported better performance with fewer parameters. This RNN approach was further developed by others, e.g., BIBREF32 , BIBREF33 .
It should be noted that DNNs have been used in other ways in LID. For example, Song et al. BIBREF34 used a DNN to extract phonetic features for the i-vector system, and Ferrer et al. BIBREF35 proposed a DNN i-vector approach that uses posteriors produced by a phone-discriminative FFDNN to compute the Baum-Welch statistics. Tian et al. BIBREF36 extended this by using an RNN to produce the posteriors. These methods all use neural models as part of the system, but their basic framework is still probabilistic, so they share the same problem of decision latency. In this paper, we focus on the pure neural approach that uses neural models as the basic framework, so that short-time language information can be learned by frame-level discriminative training.
Motivation of the paper
All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach.
Table 1 summarizes different systems that use deep neural models in LID. The probabilistic approach uses DNNs as part of a probabilistic system, e.g., GMM or i-vector, while the neural approach uses various types of DNNs as the decision architecture. Both approaches may use either acoustic features or phonetic features. The proposed PTN approach is at the bottom-right of the table.
Paper organization
The remainder of the paper is organized as follows: the model structures of the PTN approach will be presented in Section "Phonetic neural modelling for LID" , which is followed by the implementation details in Section "Model structure" . The experiments and results are reported in Section "Experiments" , and some conclusions and future work will be presented in Section "Conclusions" .
Phonetic neural modelling for LID
In this section, we present the models that employ phonetic information for RNN LID. While the phonetically aware approach treats phonetic information as auxiliary knowledge, the PTN approach uses phonetic information as the only input into the RNN LID system. Both are depicted in Fig. 1 .
Phonetically aware acoustic neural model
The instinctive idea for utilizing phonetic information in the RNN LID system is to treat it as auxiliary knowledge, which we call a phonetically aware approach. Intuitively, this can be regarded as a knowledge-fusion method that uses both the phonetic and acoustic features to learn LID models. Fig. 1 (a) shows this model. A phonetic DNN model (this may be in any structure, such as FFDNN, RNN, TDNN) is used to produce frame-level phonetic features. These can be read from anywhere in the phonetic DNN, such as the output, or the last hidden layer, and then be propagated to the LID model, an LSTM-RNN in our study. This propagated phonetic information can be accepted by the LID model in different ways. For example, it can be part of the input, or as an additional term of the gate or non-linear activation functions.
Phonetic temporal neural model
The second model, which we call the PTN model, completely replaces the acoustic feature with the phonetic feature, and thus entirely relies on the properties of the phonetic representation. This learning is based on the RNN model, therefore the temporal patterns of the phonetic features can be learned. This PTN system is shown in Fig. 1 (b). Although the PTN model is a special, `aggressive' case of the phonetically aware approach, the success of this model offers a deeper insight into the LID task as it rediscovers the importance of the temporal properties of phonetic representations.
Understanding the PTN approach
The rationality of the PTN approach can be understood from two perspectives: the phonetic perspective, which relates to what information is important, and the transfer learning perspective, which relates to how this information is learned.
Phonetic perspective: The PTN approach adopts the long-standing hypothesis (as used by the PRLM model) that languages should be discriminated by phonetic rather than spectral properties. However this has been largely overlooked since the success of the i-vector approach, which achieved good performance using only raw acoustic features. However, Song et al. BIBREF34 recently rediscovered the value of phonetic features in the i-vector model. The PTN approach proposed here follows the same idea and rediscovers the value of phonetic features in the neural model. We argue that this value is more important for the neural model than for the probabilistic model (e.g., i-vector), as its decision is based on only a small number of frames, and thus requires that the feature involves more language-related information and less noise and uncertainties. The i-vector model, in contrast, can utilize more speech signals, hence can discover language-related information from the distributional patterns even with raw acoustic features.
Both the PTN approach and the historical token-based approach share the same idea of utilizing phonetic information and modelling the temporal patterns, but they are fundamentally different. Firstly, the phonetic information in the PTN approach is frame-level, while in conventional token-based methods this information is unit-level. Therefore, the PTN approach can represent phonetic properties at a higher temporal resolution. Secondly, conventional token-based methods represent phonetic information as sequences derived from phone recognition, while the PTN approach represents phonetic information as a feature vector that involves information contributed by all phones, and thus more detailed phonetic information is represented. Finally, the back-end model of the conventional token-based approach is an n-gram LM based on discrete tokens and trained with the maximum likelihood (ML) criterion, while the back-end model of the PTN approach is an RNN, which functions similarly to an RNN LM, but is based on continuous phonetic features, and trained with a task-oriented criterion that discriminates the target languages.
Transfer learning perspective: The second perspective to understand the PTN approach is from the transfer learning perspective BIBREF37 . It is well known that DNNs perform very well at learning task-oriented features from raw data. This is the hypothesis behind conventional acoustic RNN LID methods: if the neural model is successfully trained, it can learn any useful information from the raw acoustic features layer by layer, including the phonetic information. It therefore initially seems unnecessary to design our PTN phonetic feature learning and modelling architecture. However, we argue that using the language labels alone to learn LID-related information from raw acoustic features is highly ineffective, because these labels are too coarse to provide sufficient supervision. With the PTN model, feature extraction is trained on speech data labelled with phones or words which are highly informative and fine-grained (compared to language labels), leading to a strong DNN model for phonetic feature extraction. Importantly, phone discrimination and language identification are naturally correlated (from our phonetic perspective), which means that the phonetic features learned with the strong phone/word supervision involves rich information suitable for LID. This is an example of transfer learning, where a related task (i.e., phone discrimination) is used to learn features for another task (LID).
The PTN approach also involves another two transfer learning schemes: cross language and cross condition (databases). This means that the phonetic DNN can be learned with any speech data in any language. This property was identified in token-based LID BIBREF19 , however it is more important for the phonetic neural models, as training the phonetic DNN requires a large amount of speech data which is often not available for the target languages and the operating conditions under test. Moreover, it is also possible to train the phonetic DNN with multilingual, multi-conditional data BIBREF38 , resulting in robust and reliable phonetic feature extraction.
In summary, the PTN approach utilizes a detailed phonetic representation (DNN phonetic feature), and a powerful temporal model (LSTM-RNN) to capture the phonetic temporal properties of a language with a high temporal resolution. It also utilizes three types of transfer learning to ensure that the phonetic feature is representative and robust. Our PTN approach is therefore very powerful and flexible, and reconfirms the belief of many LID researchers that phonetic temporal information is highly valuable in language discrimination, not only for humans but also for machines.
Model structure
This section presents the details of the phonetic neural LID models, including both the phonetically aware model and the PTN model. The phonetic DNN can be implemented in various DNN structures, and here we choose the TDNN BIBREF39 which can learn long-term phonetic patterns and performed well in our experiments.
For the LID neural model, we choose the LSTM-RNN. One reason for this choice is that LSTM-RNN has been demonstrated to perform well in both the pure neural LID approach BIBREF31 and the neural-probabilistic hybrid LID approach BIBREF36 . Another reason is that the RNN model can learn the temporal properties of speech signals, which is in accordance with our motivation to model the phonetic dynamics, as in the conventional PRLM approach BIBREF20 . We first describe the LSTM-RNN structure used for LID, and then present the model structures of the phonetically aware acoustic RNN model and PTN model.
LSTM-RNN LID
The LSTM-RNN model used in this study is a one-layer RNN model, where the hidden units are LSTM. The structure proposed by Sak et al. BIBREF40 is used, as shown in Fig. 2 .
The associated computation is given as follows:
$$i_t &=& \sigma (W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i) \nonumber \\ f_t &=& \sigma (W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f) \nonumber \\ c_t &=& f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + b_c) \nonumber \\ o_t &=& \sigma (W_{ox}x_t + W_{or}r_{t-1} + W_{oc}c_t + b_o) \nonumber \\ m_t &=& o_t \odot h(c_t) \nonumber \\ r_t &=& W_{rm} m_t \nonumber \\ p_t &=& W_{pm} m_t \nonumber \\ y_t &=& W_{yr}r_t + W_{yp}p_t + b_y \nonumber $$ (Eq. 13)
In the above equations, the $W$ terms denote weight matrices, and those associated with the cells were constrained to be diagonal in our implementation. The $b$ terms denote bias vectors. $x_t$ and $y_t$ are the input and output symbols respectively; $i_t$ , $f_t$ , $o_t$ represent the input, forget and output gates, respectively; $c_t$ is the cell and $m_t$ is the cell output. $r_t$ and $p_t$ are two output components derived from $m_t$ , where $r_t$ is recurrent and fed to the next time step, while $p_t$ is not recurrent and contributes to the present output only. $\sigma $ is the logistic sigmoid function, and $g$ and $h$ are non-linear activation functions, chosen here to be the hyperbolic tangent. $\odot $ denotes element-wise multiplication.
In this study, the LSTM layer consists of $1,024$ cells, and the dimensionality of both the recurrent and non-recurrent projections is set to 256. The natural stochastic gradient descent (NSGD) algorithm BIBREF41 was employed to train the model. During training and decoding, the cells were reset every 20 frames to ensure that only short-time patterns are learned.
Phonetically aware neural LID
In the phonetically aware model, the phonetic feature is read from the phonetic DNN and is propagated to the LID RNN as additional information to assist the acoustic neural LID. The phonetic feature can be read either from the output (phone posterior) or the last hidden layer (logits), and can be propagated to different components of the RNN LID model, e.g., the input/forget/output gates and/or the non-linear activation functions.
Fig. 3 (a) illustrates a simple configuration, where the phonetic DNN is a TDNN model, and the feature is read from the last hidden layer. The phonetic feature is propagated to the non-linear function $g(\cdot )$ . With this configuration, calculation of the LID RNN is similar, except that the cell value should be updated as follows: $ c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + \underline{W^{\prime }_{c\phi }\phi _{t}} + b_c) $
where $\phi _t$ is the phonetic feature obtained from the phonetic DNN.
Phonetic temporal neural (PTN) LID
The phonetically aware acoustic RNN model is an acoustic-based approach, with the phonetic feature used as auxiliary information. In contrast, the PTN approach assumes that the phonetic temporal properties cover most of the information for language discrimination, so the acoustic feature is not important any more. Therefore, it removes all acoustic features and uses the phonetic features as the only input of the LID RNN, as shown in Fig. 3 (b).
It is interesting to compare the PTN approach with other LID approaches. Firstly, it can be regarded as a new version of the conventional PRLM approach, particularly the recent PRLM implementation using RNN as the LM BIBREF42 . The major difference is that the PTN approach uses frame-level phonetic features while the PRLM approach uses token-level phonetic sequences; in addition, the phonetic information in the PTN approach is much richer than for PRLM, as it is represented as a continuous phonetic vector rather than discrete phonetic symbols.
The PTN approach is also correlated to the neural-probabilistic hybrid approach, where the phonetic DNN is used to produce phonetic features, from which the GMM or i-vector model is constructed. The PTN approach uses the same phonetic features, but employs an RNN model to describe the dynamic property of the feature, instead of modelling the distributional property using GMM or i-vector models. As will be discussed in the next section, temporal modelling is very important for phonetic neural models.
Finally, compared to the conventional acoustic RNN LID model, the PTN model uses phonetic features rather than acoustic features. Since the phonetic features can be learned with a very large speech database, they are much more robust against noise and uncertainties (e.g., speaker traits and channel distortions) than the raw acoustic features. This suggests that the PTN approach is more robust against noise than the conventional acoustic RNN approach.
Databases and configurations
The experiments were conducted on two databases: the Babel database and the AP16-OLR database. The Babel database was collected as part of the IARPA (Intelligence Advanced Research Projects Activity) Babel program, which aimed to develop speech technologies for low-resource languages. The sampling rate is 8 kHz and the sample size is 16 bits. In this paper, we chose speech data from seven languages in the Babel database: Assamese, Bengali, Cantonese, Georgian, Pashto, Tagalog and Turkish. For each language, official training and development datasets were provided. The training datasets contain both conversational and scripted speech, and the development datasets only contain conversational speech. We used the entire training set of each language for model training, but randomly selected $2,000$ utterances from the development set of each language to perform testing.
The training data sets from the seven languages are as follows: Assamese 75 hours, Bengali 87 hours, Cantonese 175 hours, Georgian 64 hours, Pashto 111 hours, Tagalog 116 hours and Turkish 107 hours. The average duration of the test utterances is $4.15$ seconds, ranging from $0.19$ seconds to $30.85$ seconds.
The AP16-OL7 database was originally created by Speechocean Inc., targeted towards various speech processing tasks (mainly speech recognition), and was used as the official data for the AP16-OLR LID challenge. The database contains seven datasets, each in a particular language. These are: Mandarin, Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. The data volume for each language is approximately 10 hours of speech signals recorded by 24 speakers (12 males and 12 females), with each speaker recording approximately 300 utterances in reading style by mobile phones, with a sampling rate of 16kHz and a sample size of 16 bits. Each dataset was split into a training set consisting of 18 speakers, and a test set consisting of 6 speakers. For Mandarin, Cantonese, Vietnamese and Indonesian, the recording was conducted in a quiet environment. For Russian, Korean and Japanese, there are 2 recording conditions for each speaker, quiet and noisy. The average duration (including silence) of all the $12,939$ test utterances of the seven languages is $4.74$ seconds, ranging from $1.08$ seconds to $18.06$ seconds.
The phonetic DNN is a TDNN structure, and the LID model is based on the LSTM-RNN. The raw feature used for those models consists of 23-dimensional Fbanks, with a symmetric 2-frame window for RNN and a symmetric 4-frame window for TDNN to splice neighboring frames. All the experiments were conducted with Kaldi BIBREF43 . The default configurations of the Kaldi WSJ s5 nnet3 recipe were used to train the phonetic DNN and the LID RNN. We first report experiments based on the Babel database, and then experiments with the AP16-OLR database.
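The symmetric frame splicing can be sketched as follows (a generic NumPy version assuming edge padding; the actual splicing is performed inside the Kaldi recipes):

```python
import numpy as np

def splice_frames(feats, context):
    """Splice a symmetric context window around each frame.

    feats:   (T, D) array of 23-dimensional Fbank frames.
    context: 2 for the RNN input, 4 for the TDNN input (as in the paper).
    Returns an array of shape (T, (2 * context + 1) * D); edges are padded
    by repeating the first/last frame.
    """
    T, _ = feats.shape
    padded = np.pad(feats, ((context, context), (0, 0)), mode='edge')
    return np.hstack([padded[i:i + T] for i in range(2 * context + 1)])
```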
Babel: baseline of bilingual LID
As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).
For the i-vector baseline, the UBM involves $2,048$ Gaussian components and the dimensionality of the i-vectors is 400. The static acoustic features consist of 12-dimensional MFCCs and the log energy. These static features are augmented by their first and second order derivatives, resulting in 39-dimensional feature vectors. In our experiment, we train an SVM for each language to determine the score of a test i-vector belonging to that language. The SVMs are trained on the i-vectors of all training segments, following the one-versus-rest strategy.
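A hedged sketch of this back-end with scikit-learn is shown below; the paper does not name the SVM toolkit or kernel, so a linear one-versus-rest classifier is assumed here.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def train_ivector_backend(train_ivectors, train_lang_labels):
    """One-versus-rest linear SVMs over 400-dimensional i-vectors."""
    backend = OneVsRestClassifier(LinearSVC())
    backend.fit(train_ivectors, train_lang_labels)
    return backend

def score_ivector(backend, test_ivector):
    """Return per-language scores for a single test i-vector."""
    return backend.decision_function(test_ivector.reshape(1, -1))[0]
```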
The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems.
The LID task can be conducted by either AG-RNN-LID or AG-RNN-MLT (using the LID output group) at the frame-level (denoted as `Fr.'), using the frame-level language posteriors they produce. To evaluate the utterance-level (denoted as `Utt.') performance, the frame-level posteriors are averaged to form the utterance-level posterior, by which the language decision can be made.
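The frame-to-utterance aggregation is simple enough to state in a few lines (plain NumPy, assuming the RNN has already produced frame-level language posteriors):

```python
import numpy as np

def utterance_decision(frame_posteriors, languages):
    """frame_posteriors: (T, num_languages) frame-level language posteriors."""
    utt_posterior = frame_posteriors.mean(axis=0)      # average over frames
    return languages[int(np.argmax(utt_posterior))], utt_posterior
```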
The performance results with the three baseline systems, in terms of $C_{avg}$ and equal error rate (EER), are shown in Table 2 . The results indicate that both the LID RNN and the multi-task LID RNN are capable of language discrimination, and the multi-task RNN significantly outperforms both the LID RNN and the i-vector baseline. This indicates that the phone information is very useful for neural LID, even if simply used as an auxiliary objective in the model training, hence supporting our transfer learning perspective, as described in Section "Phonetic neural modelling for LID" .
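For reference, the EER reported here can be computed from per-trial target and non-target scores roughly as sketched below; this is a generic approximation of the metric, not the exact NIST-style scoring script used for $C_{avg}$ and EER.

```python
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """EER: the operating point where miss and false-alarm rates are equal."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    eer = 1.0
    for th in thresholds:
        miss = np.mean(target_scores < th)      # miss rate at this threshold
        fa = np.mean(nontarget_scores >= th)    # false-alarm rate
        if fa <= miss:                          # crossing point reached
            eer = (miss + fa) / 2.0
            break
    return eer
```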
The multi-task learning approach is an interesting way to involve phonetic information in LID. However, it has the limitation of requiring the training data to be labelled in both languages and words/phones. This is very costly and not feasible in most scenarios. The phonetic neural models (the phonetically aware model and the PTN model) do not suffer from this problem.
Babel: phonetically aware bilingual LID
The phonetically aware architecture uses phonetic features as auxiliary information to improve the RNN LID. We experimented with various architectures for the phonetic DNN, and found that the TDNN structure is a good choice. In this experiment, the TDNN structure is composed of 6 time-delay layers, with each followed by a p-norm layer that reduces the dimensionality of the activation from $2,048$ to 256, the same dimension as the recurrent layer of the LID LSTM-RNN. The activations of the last hidden layer in the TDNN are read out as the phonetic feature.
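The dimensionality reduction performed by the p-norm layer can be sketched as follows (a NumPy approximation of Kaldi's group p-norm nonlinearity; the grouping and the choice $p=2$ are assumptions for illustration):

```python
import numpy as np

def pnorm(x, output_dim, p=2):
    """Group p-norm nonlinearity: partition the activations into groups and
    take the p-norm of each group, e.g. reducing 2,048 values to 256."""
    group_size = x.shape[-1] // output_dim
    grouped = x.reshape(*x.shape[:-1], output_dim, group_size)
    return np.linalg.norm(grouped, ord=p, axis=-1)
```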
Two TDNN models are trained. The AG-TDNN-MLT model is a multi-task model trained with the Assamese and Georgian data, and there are two groups of output targets, phone labels and language labels. The ASR performance (WER) of the AG-TDNN-MLT model is $66.4\%$ and $64.2\%$ for Assamese and Georgian respectively. The SWB-TDNN-ASR model is an ASR model trained with the Switchboard database. This database involves 317 hours of telephone speech signals in English, recorded from $4,870$ speakers. The ASR performance (WER) of SWB-TDNN-ASR is $20.8\%$ on the Eval2000 dataset.
Another design decision that had to be made was to choose which component in the LID RNN will receive the phonetic information. After a series of preliminary experiments, it was found that the $g$ function is the best receiver. With this choice and the two TDNN phonetic DNNs, we therefore build the phonetically aware LID system. The results are shown in Table 3 . Several conclusions can be obtained from the results.
The phonetically aware system significantly outperforms the baseline RNN LID system (second row of the results in Table 2 ). This suggests that involving phonetic information with RNN LID has clear benefits.
The phonetically aware system significantly outperforms the multi-task RNN LID (third row of the results in Table 2 ). Note that in the multi-task RNN LID, the phonetic knowledge is used as an auxiliary task to assist the LID RNN training and has shown great benefits. The advantages of the phonetically aware system demonstrated that using the phonetic knowledge to produce phonetic features seems to be a better method than using the knowledge to directly assist model training.
The phonetic DNN trained with Assamese and Georgian data (AG-TDNN-MLT) shows better performance than the one trained with the Switchboard dataset (SWB-TDNN-ASR). This is not surprising as Assamese and Georgian are the two languages chosen to discriminate between in the experiments presented in this section, so AG-TDNN-MLT is more consistent with this LID task. Nevertheless, it is still highly interesting to observe that clear benefits can be obtained by using phonetic features produced by SWB-TDNN-ASR, which is trained with a completely irrelevant dataset, in terms of both languages and environmental conditions. This confirmed our transfer learning perspective theory (as discussed previously), and demonstrated that phonetic features are largely portable and the phonetic DNN can be trained with any data in any languages. This observation is particularly interesting for LID tasks on low-resource languages, as the phonetic DNN can be trained with data from any rich-resource languages.
Babel: PTN for bilingual LID
In the above experiments, the phonetic feature is used as auxiliary information. Here, we evaluate the PTN architecture where the phonetic feature entirely replaces the acoustic features (Fbanks). The experiment is conducted with two phonetic DNN models: AG-TDNN-MLT and SWB-TDNN-ASR.
The results are presented in Table 4 . We first observe that the PTN systems perform as well as the best phonetically aware system in Table 3 , and even better in terms of the utterance-level EER. For better comparison, we also test the special case of the phonetically aware RNN LID (Ph. Aware), where both the phonetic and acoustic features are used as the LID RNN input (Ph+Fb). This is the same as the PTN model, but involves additional acoustic features. The results are shown in the second group of Table 4 . It can be seen that this feature combination does not provide any notable improvement to the results. This means that the phonetic feature is sufficient to represent the distinctiveness of each language, in accordance with our argument that language characters are mostly phonetic.
We also attempted to use the TDNN as the LID model (replacing the RNN) to learn static (rather than temporal) patterns of the phonetic features. We found that this model failed to converge. The same phenomenon was also observed in the AP16-OLR experiment (which will be discussed later in the paper). This is an important observation and it suggests that, with the phonetic feature, only the temporal properties are informative for language discrimination.
Babel: Phonetic knowledge or deep structure?
The good performance using only the phonetic features (i.e., the PTN approach) leads to the question of how this performance advantage in comparison to the RNN LID baseline is obtained. This paper has discussed the phonetic and transfer learning perspectives, which jointly state that the main advantage of PTN is the phonetic knowledge learned through transfer learning. However, another possible reason is that the deeper architecture consisting of both the phonetic DNN and the LID RNN may help to learn more abstract features. If the latter reason is more important, then a similar deep structure with only the LID labels can work similarly well. To answer this question, we design the following three experiments to test the contributions to the results from phonetic information (transfer learning) and deep architecture (deep learning):
TDNN-LSTM. The phonetic DNN, TDNN in the experiment, is initialized randomly and trained together with the LID RNN. This means that the TDNN is not trained with ASR labels, but as part of the LID neural model, and is trained end-to-end.
Pre-trained TDNN-LSTM. The same as TDNN-LSTM, except that the TDNN is initialized by AG-TDNN-MLT.
3-layer LSTM-RNN. The 1-layer LSTM-RNN LID model may not be strong enough to learn useful information from acoustic features, hence leading to the suboptimal performance in Table 2 . We experiment with a 3-layer LSTM-RNN LID system to test if a simple deeper network can obtain the same performance as with the phonetic feature.
The results of these three deep models are shown in Table 5 . The TDNN-LSTM model completely fails. Using the phonetic TDNN as the initialization helps the training, but the results are worse than directly using the phonetic model. This means that the phonetic feature is almost optimal, and does not require any further LID-oriented end-to-end training. Finally, involving more LSTM layers (3-layer LSTM-RNN) does improve the performance a little when compared to the one-layer LSTM baseline ( $7.70$ vs $9.20$ , ref. to Table 2 ). These results indicate that the improvement with the PTN architecture is mainly due to the phonetic information it has learned from the ASR-oriented training (sometimes by multi-task learning), rather than the deep network structure. In other words, it is the transfer learning instead of deep learning that improves LID performance with the PTN architecture.
Babel: PTN on seven languages
We evaluate various LID models on the seven languages of the Babel database. First, the i-vector and LSTM-RNN LID baselines are presented. For the i-vector system, linear discriminative analysis (LDA) is employed to promote language-related information before training SVMs. The dimensionality of the LDA projection space is set to 6. For the phonetically aware RNN and the PTN systems, two phonetic DNNs are evaluated, AG-TDNN-MLT and SWB-TDNN-ASR. For the phonetically aware system, the $g$ function of the LSTM-RNN LID model is chosen as the receiver. The results are shown in Table 6 . It can be seen that both the phonetically aware and the PTN systems outperform the i-vector baseline and the acoustic RNN LID baseline, and that the PTN system with the AG-TDNN-MLT phonetic DNN performs the best. The SWB-TDNN-ASR performs slightly worse than AG-TDNN-MLT, indicating that familiarity with the language and the environment is beneficial when discriminating between languages. However, phonetic DNNs trained with data in foreign languages and in mismatched environment conditions (e.g., SWB-TDNN-ASR) still work well.
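Extending the earlier SVM sketch, the seven-language i-vector back-end with an LDA projection might look as follows (scikit-learn names assumed; the toolkit actually used is not stated in the paper):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_lda_svm_backend(train_ivectors, train_lang_labels, lda_dim=6):
    """LDA to 6 dimensions (num_languages - 1), then one-vs-rest linear SVMs."""
    backend = make_pipeline(
        LinearDiscriminantAnalysis(n_components=lda_dim),
        OneVsRestClassifier(LinearSVC()),
    )
    backend.fit(train_ivectors, train_lang_labels)
    return backend
```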
AP16-OLR: PTN on seven languages
In this section, we test the phonetic RNN LID approach on the AP16-OLR database. Compared to the Babel database, the speech signals in AP16-OLR are broadband (sampling rate of 16 kHz), and the acoustic environment is less noisy. Additionally, the speech data of each language is much more limited (10 hours per language), so we assume that training a phonetic DNN model is not feasible with the data of the target languages. We therefore utilize transfer learning, i.e., using phonetic DNNs trained on data in other languages.
All the test conditions are the same as in the seven-language Babel experiment. We trained two phonetic DNNs: one is a TDNN model of the same size as the AG-TDNN-MLT model in Section "Babel: phonetically aware bilingual LID" , but trained on the WSJ database, denoted by `WSJ-TDNN-ASR'. The other is also a TDNN, but is taken from an industry project, trained on a speech database involving $10,000$ hours of Chinese speech signals with 40-dimensional Fbanks. The network contains 7 rectifier TDNN layers, each containing $1,200$ hidden units. This model is denoted by `CH-TDNN-ASR'. The weight matrix of the last hidden layer in CH-TDNN-ASR is decomposed by SVD, where the low rank is set to 400. The 400-dimensional activations are read from the low-rank layer and are used as the phonetic feature.
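The SVD-based low-rank read-out can be sketched as below (NumPy; the split of the weight matrix and the variable names are illustrative only):

```python
import numpy as np

def svd_bottleneck(W_last, rank=400):
    """Decompose the last hidden layer's weight matrix as W_last ~= A @ B,
    where B projects into a rank-400 space whose activations serve as the
    400-dimensional phonetic feature."""
    U, s, Vt = np.linalg.svd(W_last, full_matrices=False)
    A = U[:, :rank] * s[:rank]        # (out_dim, rank)
    B = Vt[:rank, :]                  # (rank, in_dim)
    return A, B

def phonetic_feature(B, hidden_input):
    """Read the low-rank layer activations as the phonetic feature."""
    return B @ hidden_input
```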
The test results on the seven languages in the database are shown in Table 7 . It can be seen that the phonetic RNN LID models, either the phonetically aware RNN or the PTN approach, significantly outperform the acoustic RNN baseline system. The PTN system seems much more effective, which differs from the Babel database results. This may be attributed to the limited training data, so the simpler PTN architecture is preferred. Comparing the WSJ-based phonetic DNN and the Chinese phonetic DNN, the Chinese model is better. This may be attributed to several reasons: (1) the Chinese database contains a larger volume of training data; (2) Chinese is one of the seven languages in AP16-OLR; (3) Chinese is more similar to the remaining 6 target languages in comparison to English, as most of the languages in AP16-OLR are oriental languages.
Another observation is that the i-vector system outperforms the phonetic RNN systems in the AP16-OLR experiment, which is inconsistent with the observations in the Babel experiment, where both phonetic systems significantly outperform the i-vector system. This discrepancy can be attributed to the different data profiles of the two databases, with two possible key factors: (1) the utterances of AP16-OLR are longer than those of Babel, making the i-vector system more effective; (2) the speech signals of AP16-OLR are cleaner than those of Babel. The RNN system is more robust against noise, and this advantage is less prominent with clean data. We will examine the two conjectures in the following experiments.
AP16-OLR: utterance duration effect
To show the relative advantage of the RNN and the i-vector systems on utterances of different length, we select the utterances of at least 5 seconds from the AP16-OLR test set, and create 10 test sets by dividing them into small utterances of different durations, from $0.5$ seconds to 5 seconds, in steps of $0.5$ seconds. Each group contains $5,907$ utterances, and each utterance in a group is a random segment excerpted from the original utterance.
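Constructing these duration-controlled test sets can be sketched as follows, assuming frame-level features at roughly 100 frames per second (an assumption for illustration; the actual excerption was performed on the test segments used in the experiments):

```python
import numpy as np

def excerpt_random_segment(utterance_feats, duration_sec, frame_rate=100, rng=None):
    """Cut a random segment of the requested duration from a longer utterance."""
    rng = rng or np.random.default_rng()
    seg_len = int(duration_sec * frame_rate)
    max_start = max(utterance_feats.shape[0] - seg_len, 0)
    start = int(rng.integers(0, max_start + 1))
    return utterance_feats[start:start + seg_len]

# Example: build the ten test sets (0.5 s to 5 s in 0.5 s steps)
# test_sets = {d: [excerpt_random_segment(u, d) for u in long_utterances]
#              for d in np.arange(0.5, 5.01, 0.5)}
```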
The performance of the i-vector and PTN systems on the 10 test sets are shown in Fig. 4 , in terms of $C_{avg}$ and EER respectively. It is clear that the PTN system is more effective on short utterances, and if the utterance duration is more than 3 seconds, the i-vector system is the best performer, especially in terms of EER.
The duration distribution of the test utterances of the Babel database and the AP16-OLR database are shown in Fig. 5 . It is clear that the test utterances are generally longer in AP16-OLR than in Babel. This explains why the relative performance of the i-vector system and the RNN system is inconsistent between the two databases.
AP16-OLR: noise robustness
Finally, we test the hypothesis that the RNN system is more robust against noise. Firstly, white noise is added to the AP16-OLR test set at different SNR levels, and the noise-augmented data are tested on two systems: the i-vector baseline and the best-performing PTN system from Table 7 , i.e., with CH-TDNN-ASR as the phonetic DNN. The results of these two systems with different levels of white noise are shown in Table 8 . It can be seen that the PTN system is more noise-robust: with more noise corruption, the gap between the i-vector system and the PTN system becomes less significant, and the PTN system is better than the i-vector system in terms of $C_{avg}$ when the noise level is high (SNR=10). This can be observed more clearly in Fig. 6 , where the performance degradation rates compared to the noise-free condition are shown. The figure shows that when the noise increases, the performance degradation with the PTN system is less significant compared to the degradation with the i-vector system. As the Babel speech data is much more noisy than the AP16-OLR speech, this noise robustness with the PTN approach partly explains why the relative performance is inconsistent between the two databases.
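The white-noise corruption at a target SNR can be sketched as below (a generic waveform-level implementation; the exact corruption tool used in the experiments is not specified):

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise to a waveform at the requested SNR (in dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```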
Conclusions
This paper proposed a phonetic temporal neural (PTN) approach for language identification. In this approach, phonetic features are substituted for acoustic features to build an RNN LID model. Our experiments conducted on the Babel and AP16-OLR databases demonstrated that the PTN approach can provide dramatic performance improvement over the baseline RNN LID system, with even better results than a phonetically aware approach that treats the phonetic feature as additional auxiliary information. This demonstrated that phonetic temporal information is much more informative than raw acoustic information for discriminating between languages. This was a long-standing belief of LID researchers in the PRLM era, but has been doubted since the increased popularity and utilization of the i-vector approach in recent years. Future work will improve the performance of the neural LID approach on long sentences, by enabling the LSTM-RNN to learn long-time patterns, e.g., by multi-scale RNNs BIBREF44 . | Unanswerable
79a28839fee776d2fed01e4ac39f6fedd6c6a143 | 79a28839fee776d2fed01e4ac39f6fedd6c6a143_0 | Q: What is the main contribution of the paper?
Text: Introduction
Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues.
Cues for language identification
There are more than 5000 languages in the world, and each language has distinct properties at different levels, from acoustic to semantics BIBREF0 , BIBREF1 , BIBREF2 . A number of studies have investigated how humans use these properties as cues to distinguish between languages BIBREF3 . For example, Muthusamy BIBREF4 found that familiarity with a language is an important factor affecting LID accuracy, and that longer speech samples are easier to identify. Moreover, people can easily tell what cues they use for identification, including phonemic inventory, word usage, and prosody. More thorough investigations were conducted by others by modifying speech samples to promote one or several factors. For example, Mori et al. BIBREF5 found that people are able to identify Japanese and English fairly reliably even when phone information is reduced. They argued that other non-linguistic cues such as intensity and pitch were used to decide the language. Navratil BIBREF6 evaluated the importance of various types of knowledge, including lexical, phonotactic and prosodic, by asking humans to identify five languages, Chinese, English, French, German and Japanese. Subjects were presented with unaltered speech samples, samples with randomly altered syllables, and samples with the vocal-tract information removed to leave only the F0 and amplitude. Navratil found that the speech samples with random syllables are more difficult to identify compared to the original samples (73.9% vs 96%), and removing vocal-tract information leads to significant performance reduction (73.9% vs 49.4%). This means that with this 5-language LID task, the lexical and phonotactic information is important for human decision making.
The LID experiments summarised above suggest that languages can be discriminated by multiple cues at different levels, and the cues used to differentiate different language pairs are different. In general, the cues can be categorized into three levels: feature level, token level and prosody level. At the feature level, different languages have their own implementation of phones, and the transitions between phones are also different. This acoustic speciality is a short-time property and can be identified by certain spectral analysis and feature extraction of our auditory system. At the token level, the distribution and transition patterns of linguistic tokens at various levels are significantly different. The tokens can be phones/phonemes, syllables, words or even syntactic or semantic tags. At the prosody level, the duration, pitch and stress patterns often differ between languages. For example, patterns of stress can provide an important cue for discriminating between two stressed languages, duration can also be potentially useful, and the tone patterns of syllables or words offer a clear cue to discriminate between tonal languages.
LID approaches
Based on the different types of cues, multiple LID approaches have been proposed. Early work generally focused on feature-level cues. Feature-based methods use strong statistical models built on raw acoustic features to make the LID decision. For instance, Cimarusti used LPC features BIBREF7 , and Foil et al. BIBREF8 investigated formant features. Dynamic features that involve temporal information were also demonstrated to be effective BIBREF9 . The statistical models used include Gaussian mixture models (GMMs) BIBREF10 , BIBREF11 , hidden Markov models (HMMs) BIBREF12 , BIBREF13 , neural networks (NNs) BIBREF14 , BIBREF15 , and support vector machines (SVMs) BIBREF16 . More recently, a low-rank GMM model known as the i-vector model was proposed and achieved significant success BIBREF17 , BIBREF18 . This model constrains the mean vectors of the GMM components in a low-dimensional space to improve the statistical strength for model training, and uses a task-oriented discriminative model (e.g., linear discriminative analysis, LDA) to improve the decision quality at run-time, leading to improved LID performance. Due to the short-time property of the features, most feature-based methods model the distributional characters rather than the temporal characters of speech signals.
The token-based approach is based on the characters of high-level tokens. Since the dynamic properties of adjacent tokens are more stable than adjacent raw features, temporal characters can be learned with the token-based approach, in additional to the distributional characters. A typical approach is to convert speech signals into phone sequences, and then build an n-gram language model (LM) for each target language to evaluate the confidence that the input speech matches that language. This is the famous phone recognition and language modelling (PRLM) approach. Multiple PRLM variants have been proposed, such as parallel phone recognition followed by LM (PPRLM) BIBREF19 , BIBREF20 , and phone recognition on a multilingual phone set BIBREF21 . Other tokens such as syllables BIBREF22 and words BIBREF23 , BIBREF24 have also been investigated.
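To illustrate the PRLM back-end described above, the sketch below scores a decoded phone sequence against per-language n-gram models; a toy bigram with add-one smoothing is assumed, whereas real PRLM systems use stronger LMs and lattice-based decoding.

```python
import math
from collections import defaultdict

class BigramPhoneLM:
    """Toy bigram phone LM for PRLM-style scoring (add-one smoothing)."""
    def __init__(self, phone_sequences):
        self.bigram = defaultdict(lambda: defaultdict(int))
        self.vocab = set()
        for seq in phone_sequences:
            for prev, cur in zip(['<s>'] + seq, seq + ['</s>']):
                self.bigram[prev][cur] += 1
                self.vocab.update([prev, cur])

    def log_prob(self, seq):
        logp, V = 0.0, len(self.vocab)
        for prev, cur in zip(['<s>'] + seq, seq + ['</s>']):
            counts = self.bigram[prev]
            logp += math.log((counts[cur] + 1) / (sum(counts.values()) + V))
        return logp

def prlm_identify(test_phones, lms):
    """Pick the language whose phone LM gives the highest log-likelihood."""
    return max(lms, key=lambda lang: lms[lang].log_prob(test_phones))
```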
The prosody-based approach utilizes patterns of duration, pitch, and stress to discriminate between languages. For example, Foil et al. BIBREF8 studied formant and prosodic features and found formant features to be more discriminative. Rouas et al. BIBREF25 modeled pure prosodic features by GMMs and found that their system worked well on read speech, but could not deal with the complexity of spontaneous speech prosody. Muthusamy BIBREF15 used pitch variation, duration and syllable rate. Duration and pitch patterns were also used by Hazen BIBREF21 . In most cases, the prosodic information is used as additional knowledge to improve feature or token-based LID.
Most of the above methods, no matter what information is used, heavily rely on probabilistic models to accumulate evidence from a long speech segment. For example, the PRLM method requires an n-gram probability of the phonetic sequence, and the GMM/i-vector method requires the distribution of the acoustic feature. Therefore, these approaches require long test utterances, leading to inevitable latency in the LID decision. This latency is a serious problem for many practical applications, e.g., code-switching ASR, where multiple languages may be contained within a single block of speech. For quick LID, frame-level decision is highly desirable, which therefore cannot rely on probabilistic models.
The recently emerging deep learning approach solves this problem by using various deep neural networks (DNNs) to produce frame-level LID decisions. An early successful deep neural model was developed by Lopez-Moreno et al. BIBREF26 , who proposed an approach based on a feed-forward deep neural network (FFDNN), which accepts raw acoustic features and produces frame-level LID decisions. The score for utterance-based decision is calculated by averaging the scores of the frame-level decisions. This was extended by others with the use of various neural model structures, e.g., CNN BIBREF27 , BIBREF28 and TDNN BIBREF29 , BIBREF30 . These DNN models are feature-based, but they consider a large context window, and can therefore learn the feature's temporal information, which is not possible with conventional feature-based models (such as the i-vector model), that only learn distributional information. The temporal information can be better learned by recurrent neural networks (RNNs), as proposed by Gonzalez-Dominguez et al. BIBREF31 . Using an RNN structure based on the long-short term memory unit (LSTM), the authors reported better performance with fewer parameters. This RNN approach was further developed by others, e.g., BIBREF32 , BIBREF33 .
It should be noted that DNNs have been used in other ways in LID. For example, Song et al. BIBREF34 used a DNN to extract phonetic feature for the i-vector system, and Ferrer et al. BIBREF35 proposed a DNN i-vector approach that uses posteriors produced by a phone-discriminative FFDNN to compute the Baum-Welch statistics. Tian et al. BIBREF36 extended this by using an RNN to produce the posteriors. These methods all use neural models as part of the system, but their basic framework is still probabilistic, so they share the same problem of decision latency. In this paper, we focus on the pure neural approach that uses neural models as the basic framework, so that short-time language information can be learned by frame-level discriminative training.
Motivation of the paper
All the present neural LID methods are based on acoustic features, e.g., Mel filter banks (Fbanks) or Mel frequency cepstral coefficients (MFCCs), with phonetic information largely overlooked. This may have significantly hindered the performance of neural LID. Intuitively, it is a long-standing hypothesis that languages can be discriminated between by phonetic properties, either distributional or temporal; additionally, phonetic features represent information at a higher level than acoustic features, and so are more invariant with respect to noise and channels. Pragmatically, it has been demonstrated that phonetic information, either in the form of phone sequences, phone posteriors, or phonetic bottleneck features, can significantly improve LID accuracy in both the conventional PRLM approach BIBREF11 and the more modern i-vector system BIBREF34 , BIBREF35 , BIBREF36 . In this paper, we will investigate the utilization of phonetic information to improve neural LID. The basic concept is to use a phone-discriminative model to produce frame-level phonetic features, and then use these features to enhance RNN LID systems that were originally built with raw acoustic features. The initial step is therefore feature combination, with the phonetic feature used as auxiliary information to assist acoustic RNN LID. This is improved further, as additional research identified that a simpler model using only the phonetic feature as the RNN LID input provides even better performance. We call this RNN model based on phonetic features the phonetic temporal neural LID approach, or PTN LID. As well as having a simplified model structure, the PTN offers deeper insight into the LID task by rediscovering the value of the phonetic temporal property in language discrimination. This property was historically widely and successfully applied in token-based approaches, e.g., PRLM BIBREF11 , but has been largely overlooked due to the popularity of the i-vector approach.
Table 1 summarizes different systems that use deep neural models in LID. The probabilistic approach uses DNNs as part of a probabilistic system, e.g., GMM or i-vector, while the neural approach uses various types of DNNs as the decision architecture. Both approaches may use either acoustic features or phonetic features. The proposed PTN approach is at the bottom-right of the table.
Paper organization
The remainder of the paper is organized as follows: the model structures of the PTN approach will be presented in Section "Phonetic neural modelling for LID" , which is followed by the implementation details in Section "Model structure" . The experiments and results are reported in Section "Experiments" , and some conclusions and future work will be presented in Section "Conclusions" .
Phonetic neural modelling for LID
In this section, we present the models that employ phonetic information for RNN LID. Although the phonetically aware approach treats phonetic information as auxiliary knowledge, the PTN approach uses phonetic information as the only input into the RNN LID system. Both are depicted in Fig. 1 .
Phonetically aware acoustic neural model
The instinctive idea for utilizing phonetic information in the RNN LID system is to treat it as auxiliary knowledge, which we call a phonetically aware approach. Intuitively, this can be regarded as a knowledge-fusion method that uses both the phonetic and acoustic features to learn LID models. Fig. 1 (a) shows this model. A phonetic DNN model (this may be in any structure, such as FFDNN, RNN, TDNN) is used to produce frame-level phonetic features. These can be read from anywhere in the phonetic DNN, such as the output, or the last hidden layer, and then be propagated to the LID model, an LSTM-RNN in our study. This propagated phonetic information can be accepted by the LID model in different ways. For example, it can be part of the input, or as an additional term of the gate or non-linear activation functions.
Phonetic temporal neural model
The second model, which we call the PTN model, completely replaces the acoustic feature with the phonetic feature, and thus entirely relies on the properties of the phonetic representation. This learning is based on the RNN model, therefore the temporal patterns of the phonetic features can be learned. This PTN system is shown in Fig. 1 (b). Although the PTN model is a special, `aggressive' case of the phonetically aware approach, the success of this model offers a deeper insight into the LID task as it rediscovers the importance of the temporal properties of phonetic representations.
Understanding the PTN approach
The rationality of the PTN approach can be understood from two perspectives: the phonetic perspective, which relates to what information is important, and the transfer learning perspective, which relates to how this information is learned.
Phonetic perspective: The PTN approach adopts the long-standing hypothesis (as used by the PRLM model) that languages should be discriminated by phonetic rather than spectral properties. However this has been largely overlooked since the success of the i-vector approach, which achieved good performance using only raw acoustic features. However, Song et al. BIBREF34 recently rediscovered the value of phonetic features in the i-vector model. The PTN approach proposed here follows the same idea and rediscovers the value of phonetic features in the neural model. We argue that this value is more important for the neural model than for the probabilistic model (e.g., i-vector), as its decision is based on only a small number of frames, and thus requires that the feature involves more language-related information and less noise and uncertainties. The i-vector model, in contrast, can utilize more speech signals, hence can discover language-related information from the distributional patterns even with raw acoustic features.
Both the PTN approach and the historical token-based approach share the same idea of utilizing phonetic information and modelling the temporal patterns, but they are fundamentally different. Firstly, the phonetic information in the PTN approach is frame-level, while in conventional token-based methods this information is unit-level. Therefore, the PTN approach can represent phonetic properties at a higher temporal resolution. Secondly, conventional token-based methods represent phonetic information as sequences derived from phone recognition, while the PTN approach represents phonetic information as a feature vector that involves information contributed by all phones, and thus more detailed phonetic information is represented. Finally, the back-end model of the conventional token-based approach is an n-gram LM based on discrete tokens and trained with the maximum likelihood (ML) criterion, while the back-end model of the PTN approach is an RNN, which functions similarly to an RNN LM, but is based on continuous phonetic features, and trained with a task-oriented criterion that discriminates the target languages.
Transfer learning perspective: The second perspective to understand the PTN approach is from the transfer learning perspective BIBREF37 . It is well known that DNNs perform very well at learning task-oriented features from raw data. This is the hypothesis behind conventional acoustic RNN LID methods: if the neural model is successfully trained, it can learn any useful information from the raw acoustic features layer by layer, including the phonetic information. It therefore initially seems unnecessary to design our PTN phonetic feature learning and modelling architecture. However, we argue that using the language labels alone to learn LID-related information from raw acoustic features is highly ineffective, because these labels are too coarse to provide sufficient supervision. With the PTN model, feature extraction is trained on speech data labelled with phones or words which are highly informative and fine-grained (compared to language labels), leading to a strong DNN model for phonetic feature extraction. Importantly, phone discrimination and language identification are naturally correlated (from our phonetic perspective), which means that the phonetic features learned with the strong phone/word supervision involves rich information suitable for LID. This is an example of transfer learning, where a related task (i.e., phone discrimination) is used to learn features for another task (LID).
The PTN approach also involves another two transfer learning schemes: cross language and cross condition (databases). This means that the phonetic DNN can be learned with any speech data in any language. This property was identified in token-based LID BIBREF19 ; however, it is more important for the phonetic neural models, as training the phonetic DNN requires a large amount of speech data which is often not available for the target languages and the operating conditions under test. Moreover, it is also possible to train the phonetic DNN with multilingual, multi-conditional data BIBREF38 , resulting in robust and reliable phonetic feature extraction.
In summary, the PTN approach utilizes a detailed phonetic representation (DNN phonetic feature), and a powerful temporal model (LSTM-RNN) to capture the phonetic temporal properties of a language with a high temporal resolution. It also utilizes three types of transfer learning to ensure that the phonetic feature is representative and robust. Our PTN approach is therefore very powerful and flexible, and reconfirms the belief of many LID researchers that phonetic temporal information is highly valuable in language discrimination, not only for humans but also for machines.
Model structure
This section presents the details of the phonetic neural LID models, including both the phonetically aware model and the PTN model. The phonetic DNN can be implemented in various DNN structures, and here we choose the TDNN BIBREF39 which can learn long-term phonetic patterns and performed well in our experiments.
For the LID neural model, we choose the LSTM-RNN. One reason for this choice is that LSTM-RNN has been demonstrated to perform well in both the pure neural LID approach BIBREF31 and the neural-probabilistic hybrid LID approach BIBREF36 . Another reason is that the RNN model can learn the temporal properties of speech signals, which is in accordance with our motivation to model the phonetic dynamics, as in the conventional PRLM approach BIBREF20 . We first describe the LSTM-RNN structure used for LID, and then present the model structures of the phonetically aware acoustic RNN model and PTN model.
LSTM-RNN LID
The LSTM-RNN model used in this study is a one-layer RNN model, where the hidden units are LSTM. The structure proposed by Sak et al. BIBREF40 is used, as shown in Fig. 2 .
The associated computation is given as follows:
$$\begin{aligned}
i_t &= \sigma (W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i) \\
f_t &= \sigma (W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f) \\
c_t &= f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + b_c) \\
o_t &= \sigma (W_{ox}x_t + W_{or}r_{t-1} + W_{oc}c_t + b_o) \\
m_t &= o_t \odot h(c_t) \\
r_t &= W_{rm} m_t \\
p_t &= W_{pm} m_t \\
y_t &= W_{yr}r_t + W_{yp}p_t + b_y
\end{aligned}$$ (Eq. 13)
In the above equations, the $W$ terms denote weight matrices, and those associated with the cells were constrained to be diagonal in our implementation. The $b$ terms denote bias vectors. $x_t$ and $y_t$ are the input and output symbols respectively; $i_t$, $f_t$, $o_t$ represent the input, forget and output gates, respectively; $c_t$ is the cell and $m_t$ is the cell output. $r_t$ and $p_t$ are two output components derived from $m_t$, where $r_t$ is recurrent and fed to the next time step, while $p_t$ is not recurrent and contributes to the present output only. $\sigma$ is the logistic sigmoid function, and $g$ and $h$ are non-linear activation functions, chosen to be the hyperbolic tangent. $\odot$ denotes element-wise multiplication.
In this study, the LSTM layer consists of $1,024$ cells, and the dimensionality of both the recurrent and non-recurrent projections is set to 256. The natural stochastic gradient descent (NSGD) algorithm BIBREF41 was employed to train the model. During the training and decoding, the cells were reset for each 20 frames to ensure only short-time patterns are learned.
Phonetically aware neural LID
In the phonetically aware model, the phonetic feature is read from the phonetic DNN and is propagated to the LID RNN as additional information to assist the acoustic neural LID. The phonetic feature can be read either from the output (phone posterior) or the last hidden layer (logits), and can be propagated to different components of the RNN LID model, e.g., the input/forget/output gates and/or the non-linear activation functions.
Fig. 3 (a) illustrates a simple configuration, where the phonetic DNN is a TDNN model, and the feature is read from the last hidden layer. The phonetic feature is propagated to the non-linear function $g(\cdot )$ . With this configuration, calculation of the LID RNN is similar, except that the cell value should be updated as follows: $ c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx}x_t + W_{cr}r_{t-1} + \underline{W^{\prime }_{c\phi }\phi _{t}} + b_c) $
where $\phi _t$ is the phonetic feature obtained from the phonetic DNN.
Phonetic temporal neural (PTN) LID
The phonetically aware acoustic RNN model is an acoustic-based approach, with the phonetic feature used as auxiliary information. In contrast, the PTN approach assumes that the phonetic temporal properties cover most of the information for language discrimination, so the acoustic feature is not important any more. Therefore, it removes all acoustic features and uses the phonetic features as the only input of the LID RNN, as shown in Fig. 3 (b).
It is interesting to compare the PTN approach with other LID approaches. Firstly, it can be regarded as a new version of the conventional PRLM approach, particularly the recent PRLM implementation using RNN as the LM BIBREF42 . The major difference is that the PTN approach uses frame-level phonetic features while the PRLM approach uses token-level phonetic sequences; in addition, the phonetic information in the PTN approach is much richer than for PRLM, as it is represented as a continuous phonetic vector rather than discrete phonetic symbols.
The PTN approach is also correlated to the neural-probabilistic hybrid approach, where the phonetic DNN is used to produce phonetic features, from which the GMM or i-vector model is constructed. The PTN approach uses the same phonetic features, but employs an RNN model to describe the dynamic property of the feature, instead of modelling the distributional property using GMM or i-vector models. As will be discussed in the next section, temporal modelling is very important for phonetic neural models.
Finally, compared to the conventional acoustic RNN LID model, the PTN model uses phonetic features rather than acoustic features. Since the phonetic features can be learned with a very large speech database, they are much more robust against noise and uncertainties (e.g., speaker traits and channel distortions) than the raw acoustic features. This suggests that the PTN approach is more robust against noise than the conventional acoustic RNN approach.
Databases and configurations
The experiments were conducted on two databases: the Babel database and the AP16-OLR database. The Babel database was collected as part of the IARPA (Intelligence Advanced Research Projects Activity) Babel program, which aimed to develop speech technologies for low-resource languages. The sampling rate is 8 kHz and the sample size is 16 bits. In this paper, we chose speech data from seven languages in the Babel database: Assamese, Bengali, Cantonese, Georgian, Pashto, Tagalog and Turkish. For each language, official training and development datasets were provided. The training datasets contain both conversational and scripted speech, and the development datasets only contain conversational speech. We used the entire training set of each language for model training, but randomly selected $2,000$ utterances from the development set of each language to perform testing.
The training data sets from the seven languages are as follows: Assamese 75 hours, Bengali 87 hours, Cantonese 175 hours, Georgian 64 hours, Pashto 111 hours, Tagalog 116 hours and Turkish 107 hours. The average duration of the test utterances is $4.15$ seconds, ranging from $0.19$ seconds to $30.85$ seconds.
The AP16-OL7 database was originally created by Speechocean Inc., targeted towards various speech processing tasks (mainly speech recognition), and was used as the official data for the AP16-OLR LID challenge. The database contains seven datasets, each in a particular language. These are: Mandarin, Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. The data volume for each language is approximately 10 hours of speech signals recorded by 24 speakers (12 males and 12 females), with each speaker recording approximately 300 utterances in reading style by mobile phones, with a sampling rate of 16kHz and a sample size of 16 bits. Each dataset was split into a training set consisting of 18 speakers, and a test set consisting of 6 speakers. For Mandarin, Cantonese, Vietnamese and Indonesian, the recording was conducted in a quiet environment. For Russian, Korean and Japanese, there are 2 recording conditions for each speaker, quiet and noisy. The average duration (including silence) of all the $12,939$ test utterances of the seven languages is $4.74$ seconds, ranging from $1.08$ seconds to $18.06$ seconds.
The phonetic DNN is a TDNN structure, and the LID model is based on the LSTM-RNN. The raw feature used for those models consists of 23-dimensional Fbanks, with a symmetric 2-frame window for RNN and a symmetric 4-frame window for TDNN to splice neighboring frames. All the experiments were conducted with Kaldi BIBREF43 . The default configurations of the Kaldi WSJ s5 nnet3 recipe were used to train the phonetic DNN and the LID RNN. We first report experiments based on the Babel database, and then experiments with the AP16-OLR database.
Babel: baseline of bilingual LID
As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data of two languages from Babel: Assamese and Georgian (AG).
For the i-vector baseline, the UBM involves $2,048$ Gaussian components and the dimensionality of the i-vectors is 400. The static acoustic features consist of 12-dimensional MFCCs and the log energy. These static features are augmented by their first and second order derivatives, resulting in 39-dimensional feature vectors. In our experiment, we train an SVM for each language to determine the score of a test i-vector belonging to that language. The SVMs are trained on the i-vectors of all training segments, following the one-versus-rest strategy.
The two RNN LID baselines are: a standard RNN LID system (AG-RNN-LID) that discriminates between the two languages in its output, and a multi-task system (AG-RNN-MLT) that was trained to discriminate between the two languages as well as the phones. More precisely, the output units of the AG-RNN-MLT are separated into two groups: an LID group that involves two units corresponding to Assamese and Georgian respectively, and an ASR group that involves $3,349$ bilingual senones that are inherited from an HMM/GMM ASR system trained with the speech data of Assamese and Georgian, following the standard WSJ s5 HMM/GMM recipe of Kaldi. The WSJ s5 nnet3 recipe of Kaldi is then used to train the AG-RNN-LID and AG-RNN-MLT systems.
The LID task can be conducted by either AG-RNN-LID or AG-RNN-MLT (using the LID output group) at the frame-level (denoted as `Fr.'), using the frame-level language posteriors they produce. To evaluate the utterance-level (denoted as `Utt.') performance, the frame-level posteriors are averaged to form the utterance-level posterior, by which the language decision can be made.
The performance results with the three baseline systems, in terms of $C_{avg}$ and equal error rate (EER), are shown in Table 2 . The results indicate that both the LID RNN and the multi-task LID RNN are capable of language discrimination, and the multi-task RNN significantly outperforms both the LID RNN and the i-vector baseline. This indicates that the phone information is very useful for neural LID, even if simply used as an auxiliary objective in the model training, hence supporting our transfer learning perspective, as described in Section "Phonetic neural modelling for LID" .
The multi-task learning approach is an interesting way to involve phonetic information in LID. However, it has the limitation of requiring the training data to be labelled in both languages and words/phones. This is very costly and not feasible in most scenarios. The phonetic neural models (the phonetically aware model and the PTN model) do not suffer from this problem.
Babel: phonetically aware bilingual LID
The phonetically aware architecture uses phonetic features as auxiliary information to improve the RNN LID. We experimented with various architectures for the phonetic DNN, and found that the TDNN structure is a good choice. In this experiment, the TDNN structure is composed of 6 time-delay layers, with each followed by a p-norm layer that reduces the dimensionality of the activation from $2,048$ to 256, the same dimension as the recurrent layer of the LID LSTM-RNN. The activations of the last hidden layer in the TDNN are read out as the phonetic feature.
Two TDNN models are trained. The AG-TDNN-MLT model is a multi-task model trained with the Assamese and Georgian data, and there are two groups of output targets, phone labels and language labels. The ASR performance (WER) of the AG-TDNN-MLT model is $66.4\%$ and $64.2\%$ for Assamese and Georgian respectively. The SWB-TDNN-ASR model is an ASR model trained with the Switchboard database. This database involves 317 hours of telephone speech signals in English, recorded from $4,870$ speakers. The ASR performance (WER) of SWB-TDNN-ASR is $20.8\%$ on the Eval2000 dataset.
Another design decision that had to be made was to choose which component in the LID RNN will receive the phonetic information. After a series of preliminary experiments, it was found that the $g$ function is the best receiver. With this choice and the two TDNN phonetic DNNs, we therefore build the phonetically aware LID system. The results are shown in Table 3 . Several conclusions can be obtained from the results.
The phonetically aware system significantly outperforms the baseline RNN LID system (second row of the results in Table 2 ). This suggests that involving phonetic information with RNN LID has clear benefits.
The phonetically aware system significantly outperforms the multi-task RNN LID (third row of the results in Table 2 ). Note that in the multi-task RNN LID, the phonetic knowledge is used as an auxiliary task to assist the LID RNN training and has shown great benefits. The advantages of the phonetically aware system demonstrated that using the phonetic knowledge to produce phonetic features seems to be a better method than using the knowledge to directly assist model training.
The phonetic DNN trained with Assamese and Georgian data (AG-TDNN-MLT) shows better performance than the one trained with the Switchboard dataset (SWB-TDNN-ASR). This is not surprising as Assamese and Georgian are the two languages chosen to discriminate between in the experiments presented in this section, so AG-TDNN-MLT is more consistent with this LID task. Nevertheless, it is still highly interesting to observe that clear benefits can be obtained by using phonetic features produced by SWB-TDNN-ASR, which is trained with a completely irrelevant dataset, in terms of both languages and environmental conditions. This confirmed our transfer learning perspective theory (as discussed previously), and demonstrated that phonetic features are largely portable and the phonetic DNN can be trained with any data in any languages. This observation is particularly interesting for LID tasks on low-resource languages, as the phonetic DNN can be trained with data from any rich-resource languages.
Babel: PTN for bilingual LID
In the above experiments, the phonetic feature is used as auxiliary information. Here, we evaluate the PTN architecture where the phonetic feature entirely replaces the acoustic features (Fbanks). The experiment is conducted with two phonetic DNN models: AG-TDNN-MLT and SWB-TDNN-ASR.
The results are presented in Table 4 . We first observe that the PTN systems perform as well as the best phonetically aware system in Table 3 , and even better in terms of the utterance-level EER. For better comparison, we also test the special case of the phonetically aware RNN LID (Ph. Aware), where both the phonetic and acoustic features are used as the LID RNN input (Ph+Fb). This is the same as the PTN model, but involves additional acoustic features. The results are shown in the second group of Table 4 . It can be seen that this feature combination does not provide any notable improvement to the results. This means that the phonetic feature is sufficient to represent the distinctiveness of each language, in accordance with our argument that language characters are mostly phonetic.
We also attempted to use the TDNN as the LID model (replacing the RNN) to learn static (rather than temporal) patterns of the phonetic features. We found that this model failed to converge. The same phenomenon was also observed in the AP16-OLR experiment (which will be discussed later in the paper). This is an important observation and it suggests that, with the phonetic feature, only the temporal properties are informative for language discrimination.
Babel: Phonetic knowledge or deep structure?
The good performance using only the phonetic features (i.e., the PTN approach) leads to the question of how this performance advantage in comparison to the RNN LID baseline is obtained. This paper has discussed the phonetic and transfer learning perspectives, which jointly state that the main advantage of PTN is the phonetic knowledge learned through transfer learning. However, another possible reason is that the deeper architecture consisting of both the phonetic DNN and the LID RNN may help to learn more abstract features. If the latter reason is more important, then a similar deep structure with only the LID labels can work similarly well. To answer this question, we design the following three experiments to test the contributions to the results from phonetic information (transfer learning) and deep architecture (deep learning):
TDNN-LSTM. The phonetic DNN, TDNN in the experiment, is initialized randomly and trained together with the LID RNN. This means that the TDNN is not trained with ASR labels, but as part of the LID neural model, and is trained end-to-end.
Pre-trained TDNN-LSTM. The same as TDNN-LSTM, except that the TDNN is initialized by AG-TDNN-MLT.
3-layer LSTM-RNN. The 1-layer LSTM-RNN LID model may not be strong enough to learn useful information from acoustic features, hence the suboptimal performance in Table 2. We experiment with a 3-layer LSTM-RNN LID system to test whether a simply deeper network can obtain the same performance as with the phonetic feature.
The results of these three deep models are shown in Table 5 . The TDNN-LSTM model completely fails. Using the phonetic TDNN as the initialization helps the training, but the results are worse than directly using the phonetic model. This means that the phonetic feature is almost optimal, and does not require any further LID-oriented end-to-end training. Finally, involving more LSTM layers (3-layer LSTM-RNN) does improve the performance a little when compared to the one-layer LSTM baseline ( $7.70$ vs $9.20$ , ref. to Table 2 ). These results indicate that the improvement with the PTN architecture is mainly due to the phonetic information it has learned from the ASR-oriented training (sometimes by multi-task learning), rather than the deep network structure. In other words, it is the transfer learning instead of deep learning that improves LID performance with the PTN architecture.
Babel: PTN on seven languages
We evaluate various LID models on the seven languages of the Babel database. First, the i-vector and LSTM-RNN LID baselines are presented. For the i-vector system, linear discriminant analysis (LDA) is employed to promote language-related information before training SVMs. The dimensionality of the LDA projection space is set to 6. For the phonetically aware RNN and the PTN systems, two phonetic DNNs are evaluated, AG-TDNN-MLT and SWB-TDNN-ASR. For the phonetically aware system, the $g$ function of the LSTM-RNN LID model is chosen as the receiver. The results are shown in Table 6. It can be seen that both the phonetically aware and the PTN systems outperform the i-vector baseline and the acoustic RNN LID baseline, and that the PTN system with the AG-TDNN-MLT phonetic DNN performs the best. The SWB-TDNN-ASR performs slightly worse than AG-TDNN-MLT, indicating that familiarity with the language and the environment is beneficial when discriminating between languages. However, phonetic DNNs trained with data in foreign languages and under mismatched environmental conditions (e.g., SWB-TDNN-ASR) still work well.
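As a rough, hypothetical illustration of this i-vector baseline (6-dimensional LDA projection followed by SVMs), the back-end could be sketched with scikit-learn as below; the i-vectors are assumed to be extracted elsewhere, and this is not the toolkit actually used in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def train_ivector_lid(ivectors, labels):
    """ivectors: (N, D) utterance i-vectors; labels: (N,) language ids for 7 languages."""
    lda = LinearDiscriminantAnalysis(n_components=6)    # 7 classes -> at most 6 LDA dims
    projected = lda.fit_transform(ivectors, labels)
    svm = SVC(kernel="linear", probability=True)        # multiclass linear SVM back-end
    svm.fit(projected, labels)
    return lda, svm

def score_utterance(lda, svm, ivector):
    return svm.predict_proba(lda.transform(ivector.reshape(1, -1)))[0]
```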
AP16-OLR: PTN on seven languages
In this section, we test the phonetic RNN LID approach on the AP16-OLR database. Compared to the Babel database, the speech signals in AP16-OLR are broadband (16 kHz sampling rate), and the acoustic environment is less noisy. Additionally, the speech data of each language is much more limited (10 hours per language), so we assume that training a phonetic DNN model is not feasible with the data of the target languages. We therefore utilize transfer learning, i.e., using phonetic DNNs trained on data in other languages.
All the test conditions are the same as in the seven-language Babel experiment. We trained two phonetic DNNs: one is a TDNN model of the same size as the AG-TDNN-ASR model in Section "Babel: phonetically aware bilingual LID", but trained on the WSJ database, denoted by `WSJ-TDNN-ASR'. The other is also a TDNN, taken from an industry project and trained on a speech database of $10,000$ hours of Chinese speech with 40-dimensional Fbanks. The network contains 7 rectifier TDNN layers, each containing $1,200$ hidden units. This model is denoted by `CH-TDNN-ASR'. The weight matrix of the last hidden layer in CH-TDNN-ASR is decomposed by SVD, with the low rank set to 400. The 400-dimensional activations are read from the low-rank layer and used as the phonetic feature.
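The SVD-based low-rank readout described for CH-TDNN-ASR can be sketched as follows; the matrix orientation and the way the 400-dimensional activations are read out are assumptions for illustration, since the industry model itself is not described in further detail.

```python
import numpy as np

def svd_bottleneck(W, rank=400):
    """Factor the last hidden layer's weight matrix W (out_dim x in_dim) into W ~= A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]     # (out_dim, rank)
    B = Vt[:rank, :]               # (rank, in_dim); the inserted low-rank layer
    return A, B

def phonetic_feature(B, h_prev):
    """h_prev: activations feeding the last hidden layer -> 400-dim low-rank activations."""
    return B @ h_prev
```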
The test results on the seven languages in the database are shown in Table 7 . It can be seen that the phonetic RNN LID models, either the phonetically aware RNN or the PTN approach, significantly outperform the acoustic RNN baseline system. The PTN system seems much more effective, which differs from the Babel database results. This may be attributed to the limited training data, so the simpler PTN architecture is preferred. Comparing the WSJ-based phonetic DNN and the Chinese phonetic DNN, the Chinese model is better. This may be attributed to several reasons: (1) the Chinese database contains a larger volume of training data; (2) Chinese is one of the seven languages in AP16-OLR; (3) Chinese is more similar to the remaining 6 target languages in comparison to English, as most of the languages in AP16-OLR are oriental languages.
Another observation is that the i-vector system outperforms the phonetic RNN systems in the AP16-OLR experiment, which is inconsistent with the observations in the Babel experiment, where both phonetic systems significantly outperform the i-vector system. This discrepancy can be attributed to the different data profiles of the two databases, with two possible key factors: (1) the utterances of AP16-OLR are longer than those of Babel, making the i-vector system more effective; (2) the speech signals of AP16-OLR are cleaner than those of Babel. The RNN system is more robust against noise, and this advantage is less prominent with clean data. We examine these two conjectures in the following experiments.
AP16-OLR: utterance duration effect
To show the relative advantage of the RNN and the i-vector systems on utterances of different length, we select the utterances of at least 5 seconds from the AP16-OLR test set, and create 10 test sets by dividing them into small utterances of different durations, from $0.5$ seconds to 5 seconds, in steps of $0.5$ seconds. Each group contains $5,907$ utterances, and each utterance in a group is a random segment excerpted from the original utterance.
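A hypothetical helper for building these duration-controlled test sets might look like the sketch below; the 16 kHz sampling rate follows the AP16-OLR description, while the function itself only illustrates the procedure and is not the scripts used for the experiments.

```python
import numpy as np

def make_duration_test_sets(waveforms, sample_rate=16000, seed=0):
    """Excerpt one random segment per duration (0.5 s ... 5 s) from each utterance >= 5 s."""
    rng = np.random.default_rng(seed)
    durations = np.arange(0.5, 5.0 + 1e-9, 0.5)
    long_enough = [w for w in waveforms if len(w) >= 5 * sample_rate]
    test_sets = {}
    for dur in durations:
        seg_len = int(dur * sample_rate)
        segments = []
        for w in long_enough:
            start = rng.integers(0, len(w) - seg_len + 1)   # random excerpt position
            segments.append(w[start:start + seg_len])
        test_sets[float(dur)] = segments
    return test_sets
```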
The performance of the i-vector and PTN systems on the 10 test sets is shown in Fig. 4, in terms of $C_{avg}$ and EER respectively. It is clear that the PTN system is more effective on short utterances, whereas if the utterance duration is more than 3 seconds, the i-vector system is the best performer, especially in terms of EER.
The duration distribution of the test utterances of the Babel database and the AP16-OLR database are shown in Fig. 5 . It is clear that the test utterances are generally longer in AP16-OLR than in Babel. This explains why the relative performance of the i-vector system and the RNN system is inconsistent between the two databases.
AP16-OLR: noise robustness
Finally, we test the hypothesis that the RNN system is more robust against noise. First, white noise is added to the AP16-OLR test set at different SNR levels, and the noise-augmented data are tested on two systems: the i-vector baseline and the best-performing PTN system from Table 7, i.e., the one with CH-TDNN-ASR as the phonetic DNN. The results of these two systems at different levels of white noise are shown in Table 8. It can be seen that the PTN system is more noise-robust: with more noise corruption, the gap between the i-vector system and the PTN system becomes less significant, and the PTN system is better than the i-vector system in terms of $C_{avg}$ when the noise level is high (SNR=10). This can be observed more clearly in Fig. 6, where the performance degradation rates relative to the noise-free condition are shown. The figure shows that when the noise increases, the performance degradation of the PTN system is less significant than that of the i-vector system. As the Babel speech data is much noisier than the AP16-OLR speech, this noise robustness of the PTN approach partly explains why the relative performance is inconsistent between the two databases.
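The SNR-controlled white-noise corruption can be sketched as follows; this is our own illustration of the procedure, with the noise power derived from the target SNR in dB.

```python
import numpy as np

def add_white_noise(signal, snr_db, seed=0):
    """Add Gaussian white noise so that 10*log10(P_signal / P_noise) = snr_db."""
    rng = np.random.default_rng(seed)
    signal = signal.astype(np.float64)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```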
Conclusions
This paper proposed a phonetic temporal neural (PTN) approach for language identification. In this approach, phonetic features are substituted for acoustic features to build an RNN LID model. Our experiments on the Babel and AP16-OLR databases demonstrated that the PTN approach provides a dramatic performance improvement over the baseline RNN LID system, with even better results than a phonetically aware approach that treats the phonetic feature as additional auxiliary information. This demonstrates that phonetic temporal information is much more informative than raw acoustic information for discriminating between languages. This was a long-standing belief of LID researchers in the PRLM era, but it has been doubted since the increased popularity of the i-vector approach in recent years. Future work will improve the performance of the neural LID approach on long sentences by enabling the LSTM-RNN to learn long-time patterns, e.g., with multi-scale RNNs BIBREF44. | Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance
da2b43d7d048f3f59adf26a67ce66bd2d8a06326 | da2b43d7d048f3f59adf26a67ce66bd2d8a06326_0 | Q: What training settings did they try?
Text: Introduction
Recurrent Neural Networks (RNNs) are powerful machine learning models that can capture and exploit sequential data. They have become standard in important natural language processing tasks such as machine translation BIBREF0 , BIBREF1 and speech recognition BIBREF2 . Despite the ubiquity of various RNN architectures in natural language processing, there still lies an unanswered fundamental question: What classes of languages can, empirically or theoretically, be learned by neural networks? This question has drawn much attention in the study of formal languages, with previous results on both the theoretical BIBREF3 , BIBREF4 and empirical capabilities of RNNs, showing that different RNN architectures can learn certain regular BIBREF5 , BIBREF6 , context-free BIBREF7 , BIBREF8 , and context-sensitive languages BIBREF9 .
In a common experimental setup for investigating whether a neural network can learn a formal language, one formulates a supervised learning problem where the network is presented one character at a time and predicts the next possible character(s). The performance of the network can then be evaluated based on its ability to recognize sequences shown in the training set and – more importantly – to generalize to unseen sequences. There are, however, various methods of evaluation in a language learning task. In order to define the generalization of a network, one may consider the length of the shortest sequence in a language whose output was incorrectly produced by the network, or the size of the largest accepted test set, or the accuracy on a fixed test set BIBREF10 , BIBREF11 , BIBREF9 , BIBREF12 . These formulations follow narrow and bounded evaluation schemes though: They often define a length threshold in the test set and report the performance of the model on this fixed set.
We acknowledge three unsettling issues with these formulations. First, the sequences in the training set are usually assumed to be uniformly or geometrically distributed, with little regard to the nature and complexity of the language. This assumption may undermine any conclusions drawn from empirical investigations, especially given that natural language is not uniformly distributed, an aspect that is known to affect learning in modern RNN architectures BIBREF13 . Second, in a test set where the sequences are enumerated by their lengths, if a network makes an error on a sequence of, say, length 7, but correctly recognizes longer sequences of length up to 1000, would we consider the model's generalization as good or bad? In a setting where we monitor only the shortest sequence that was incorrectly predicted by the network, this scheme clearly misses the potential success of the model after witnessing a failure, thereby misportraying the capabilities of the network. Third, the test sets are often bounded in these formulations, making it challenging to compare and contrast the performance of models if they attain full accuracy on their fixed test sets.
In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.
Our experimental evaluation is focused on the Long Short-Term Memory (LSTM) network BIBREF14 , a particularly popular RNN variant. We consider three formal languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , and investigate how LSTM networks learn these languages under different training regimes. Our investigation leads to the following insights: (1) The data distribution has a significant effect on generalization capability, with discrete uniform and U-shaped distributions often leading to the best generalization amongst all the four distributions in consideration. (2) Widening the training length-window, naturally, enables LSTM models to generalize better to longer sequences, and interestingly, the networks seem to learn to generalize to shorter sequences when trained on long sequences. (3) Higher model capacity – having more hidden units – leads to better stability, but not necessarily better generalization levels. In other words, over-parameterized models are more stable than models with theoretically sufficient but far fewer parameters. We explain this phenomenon by conjecturing that a collaborative counting mechanism arises in over-parameterized networks.
Related Work
It has been shown that RNNs with a finite number of states can process regular languages by acting like a finite-state automaton using different units in their hidden layers BIBREF5 , BIBREF6 . RNNs, however, are not limited to recognizing only regular languages. BIBREF3 and BIBREF4 showed that first-order RNNs (with rational state weights and infinite numeric precision) can simulate a pushdown automaton with two-stacks, thereby demonstrating that RNNs are Turing-complete. In theory, RNNs with infinite numeric precision are capable of expressing recursively enumerable languages. Yet, in practice, modern machine architectures do not contain computational structures that support infinite numeric precision. Thus, the computational power of RNNs with finite precision may not necessarily be the same as that of RNNs with infinite precision.
BIBREF7 investigated the learning capabilities of simple RNNs to process and formalize a context-free grammar containing hierarchical (recursively embedded) dependencies: He observed that distinct parts of the networks were able to learn some complex representations to encode certain grammatical structures and dependencies of the context-free grammar. Later, BIBREF8 introduced an RNN with an external stack memory to learn simple context-free languages, such as $a^n b^m$ , $a^nb^ncb^ma^m$ , and $a^{n+m} b^n c^m$ . Similar studies BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 have explored the existence of stable counting mechanisms in simple RNNs, which would enable them to learn various context-free and context-sensitive languages, but none of the RNN architectures proposed in the early days were able to generalize the training set to longer (or more complex) test samples with substantially high accuracy.
BIBREF9 , on the other hand, proposed a variant of Long Short-Term Memory (LSTM) networks to learn two context-free languages, $a^n b^n$ , $a^n b^m B^m A^n$ , and one strictly context-sensitive language, $a^n b^n c^n$ . Given only a small fraction of samples in a formal language, with values of $n$ (and $m$ ) ranging from 1 to a certain training threshold $N$ , they trained an LSTM model until its full convergence on the training set and then tested it on a more generalized set. They showed that their LSTM model outperformed the previous approaches in capturing and generalizing the aforementioned formal languages. By analyzing the cell states and the activations of the gates in their LSTM model, they further demonstrated that the network learns how to count up and down at certain places in the sample sequences to encode information about the underlying structure of each of these formal languages.
Following this approach, BIBREF19 and BIBREF20 studied the stability of the LSTM networks in learning context-free and context-sensitive languages and examined the processing mechanism developed by the hidden states during the training phase. They observed that the weight initialization of the hidden states in the LSTM network had a significant effect on the inductive capabilities of the model and that the solutions were often unstable in the sense that the numbers up to which the LSTM models were able to generalize using the training dataset sporadically oscillated.
The Sequence Prediction Task
Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. During testing, we use an output threshold criterion of $0.5$ for the sigmoid output layer to indicate which characters were predicted by the model. We then turn this prediction task into a classification task by accepting a sample if our model predicts all of its output values correctly and rejecting it otherwise.
Languages
We consider the following three formal languages in our predictions tasks: $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , where $n \ge 1$ . Of these three languages, the first one is a context-free language and the last two are strictly context-sensitive languages. Table 1 provides example input-output pairs for these languages under the sequence prediction task. In the rest of this section, we formulate the sequence prediction task for each language in more detail.
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n$ consists of $a$ and $b$. The output vocabulary $\mathcal {V}^{(o)}$ is the union of $\mathcal {V}^{(i)}$ and $\lbrace \dashv \rbrace $. Therefore, the input vectors are 2-dimensional, and the output vectors are 3-dimensional. Before the occurrence of the first $b$ in a sequence, the model always predicts $a$ or $b$ (which we notate $a/b$) whenever it sees an $a$. However, after it encounters the first $b$, the rest of the sequence becomes entirely deterministic: Assuming that the model observes $n$ $a$'s in a sequence, it outputs $n-1$ $b$'s for the next $n-1$ $b$'s and the terminal symbol $\dashv $ for the last $b$ in the sequence. Summarizing, we define the input-target scheme for $a^n b^n$ as follows:
$$a^n b^n \Rightarrow (a/b)^n b^{n-1}\dashv $$ (Eq. 8)
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n c^n$ consists of three characters: $a$ , $b$ , and $c$ . The output vocabulary $\mathcal {V}^{(o)}$ is $\mathcal {V}^{(i)} \cup \lbrace \dashv \rbrace $ . The input and output vectors are 3- and 4-dimensional, respectively. The input-target scheme for $a^n b^n c^n$ is:
$$a^n b^n c^n\Rightarrow (a/b)^{n}b^{n-1}c^{n}\dashv $$ (Eq. 10)
The vocabulary $\mathcal {V}^{(i)}$ for the last language $a^n b^n c^n d^n$ consists of $a$ , $b$ , $c$ , and $d$ . The input vectors are 4-dimensional, and the output vectors are 5-dimensional. As in the case of the previous two languages, a sequence becomes entirely deterministic after the observance of the first $b$ , hence the input-target scheme for $a^n b^n c^n d^n$ is:
$$a^n b^n c^n d^n\Rightarrow (a/b)^n b^{n-1} c^n d^{n}\dashv $$ (Eq. 12)
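To make these input-target schemes concrete, the sketch below generates one training pair for $a^n b^n$ following Eq. 8, with one-hot inputs over $\lbrace a, b \rbrace$ and $k$-hot targets over $\lbrace a, b, \dashv \rbrace$; it is our own illustration rather than the authors' data-generation code, and the other two languages can be handled analogously.

```python
import torch

A, B, TERM = 0, 1, 2            # symbol indices; TERM is the termination symbol

def make_anbn_sample(n):
    """Return (inputs, targets) for the string a^n b^n under the scheme (a/b)^n b^(n-1) TERM."""
    inputs = torch.zeros(2 * n, 2)
    targets = torch.zeros(2 * n, 3)
    for t in range(n):          # after each a, the next symbol may be a or b
        inputs[t, A] = 1.0
        targets[t, A] = 1.0
        targets[t, B] = 1.0
    for t in range(n, 2 * n):   # after each b, the continuation is deterministic
        inputs[t, B] = 1.0
        if t < 2 * n - 1:
            targets[t, B] = 1.0
        else:
            targets[t, TERM] = 1.0
    return inputs, targets
```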
The LSTM Model
We use a single-layer LSTM model to perform the sequence prediction task, followed by a linear layer that maps to the output vocabulary size. The linear layer is followed by a sigmoid unit layer. The loss is the sum of the mean squared error between the prediction and the correct output at each character. See Figure 1 for an illustration. In our implementation, we used the standard LSTM module in PyTorch BIBREF22 and initialized the initial hidden and cell states, $h_0$ and $c_0$ , to zero.
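A minimal PyTorch sketch consistent with this description (single-layer LSTM, linear map to the output vocabulary, sigmoid outputs, and an MSE loss summed over the characters) is given below; layer sizes and names are our own choices.

```python
import torch
import torch.nn as nn

class SequencePredictor(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=1)   # sequence-first layout
        self.linear = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):                                          # x: (seq_len, 1, input_dim)
        h0 = torch.zeros(1, x.size(1), self.lstm.hidden_size)      # zero initial hidden state
        c0 = torch.zeros(1, x.size(1), self.lstm.hidden_size)      # zero initial cell state
        out, _ = self.lstm(x, (h0, c0))
        return torch.sigmoid(self.linear(out))                     # per-character predictions in [0, 1]

def sequence_loss(predictions, targets):
    # sum over characters of the mean squared error at each character
    return ((predictions - targets) ** 2).mean(dim=-1).sum()
```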
Training and Testing
Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. We remark, for the sake of clarity, that our test design is slightly different from the traditional testing approaches used by BIBREF10 , BIBREF9 , BIBREF12 , since we do not consider the shortest sequence in a language whose output was incorrectly predicted by the model, or the largest accepted test set, or the accuracy of the model on a fixed test set.
Our testing approach, as we will see shortly in the following subsections, gives more information about the inductive capabilities of our LSTM networks than the previous techniques and proves itself to be useful especially in the cases where the distribution of the length of our training dataset is skewed towards one of the boundaries of the distribution's support. For instance, LSTM models sometimes fail to capture some of the short sequences in a language during the testing phase, but they then predict a large number of long sequences correctly. If we were to report only the shortest sequence whose output our model incorrectly predicts, we would then be unable to capture the model's inductive capabilities. Furthermore, we test and report the performance of the model after each full pass of the training set. Finally, in all our investigations, we repeated each experiment ten times. In each trial, we only changed the weights of the hidden states of the model – all the other parameters were kept the same.
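The testing protocol can be summarized by the sketch below: freeze the model, enumerate the strings of the language by length (for these languages there is exactly one string per value of $n$), and return the $n$ values of the first $k$ strings whose outputs are not all predicted correctly, i.e. $e_1, \ldots , e_k$. The enumeration cap and the helper name are assumptions.

```python
import torch

def first_k_error_lengths(model, make_sample, k=5, max_n=5000, threshold=0.5):
    """Return the n values of the first k strings the model fails on (e_1, ..., e_k)."""
    errors = []
    model.eval()
    with torch.no_grad():
        for n in range(1, max_n + 1):
            inputs, targets = make_sample(n)              # e.g. make_anbn_sample(n) above
            preds = model(inputs.unsqueeze(1)).squeeze(1)
            correct = ((preds > threshold).float() == targets).all()
            if not bool(correct):
                errors.append(n)
                if len(errors) == k:
                    break
    return errors
```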
Length Distributions
Previous studies have examined various length distribution models to generate appropriate training sets for each formal language: BIBREF16 , BIBREF11 , BIBREF12 , for instance, used length distributions that were skewed towards having more short sequences than long sequences given a training length-window, whereas BIBREF9 used a uniform distribution scheme to generate their training sets. The latter briefly comment that the distribution of lengths of sequences in the training set does influence the generalization ability and convergence speed of neural networks, and mention that training sets containing abundant numbers of both short and long sequences are learned by networks much more quickly than uniformly distributed regimes. Nevertheless, they do not systematically compare or explicitly report their findings. To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. We briefly recall the probability distribution functions for discrete uniform and Beta-Binomial distributions used in our data generation procedure.
Given $N \in \mathbb {N}$ , if a random variable $X \sim U (1, N)$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \frac{1}{N} & \text{if } x \in \lbrace 1, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
To generate training data with uniformly distributed lengths, we simply draw $n$ from $U (1, N)$ as defined above.
Similarly, given $N \in \mathbb {Z}^{\ge 0}$ and two parameters $\alpha $ and $ \beta \in \mathbb {R}^{>0}$ , if a random variable $X \sim \text{BetaBin} (N, \alpha , \beta )$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \binom{N}{x} \frac{B(x+\alpha , N-x+\beta )}{B(\alpha , \beta )} & \text{if } x \in \lbrace 0, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
where $B(\alpha , \beta )$ is the Beta function. We set different values of $\alpha $ and $\beta $ as such in order to generate the following distributions:
U-shaped ( $\alpha = 0.25$ , $\beta = 0.25$ ): The probabilities of having short and long sequences are equally high, but the probability of having an average-length sequence is low.
Right-tailed ( $\alpha = 1$ , $\beta = 5$ ): Short sequences are more probable than long sequences.
Left-tailed ( $\alpha = 5$ , $\beta = 1$ ): Long sequences are more probable than short sequences.
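The four length regimes can be sampled as in the sketch below: the uniform case draws $n$ directly from $U(1, N)$, and the Beta-Binomial cases draw $p \sim \text{Beta}(\alpha , \beta )$ and then $n \sim \text{Binomial}(N, p)$, which is equivalent to the pmf given above. The shift that keeps $n \ge 1$ is our assumption, since the handling of $n = 0$ is not specified.

```python
import numpy as np

def sample_lengths(regime, N=50, size=1000, seed=0):
    """Sample training sequence lengths under one of the four regimes."""
    rng = np.random.default_rng(seed)
    if regime == "uniform":
        return rng.integers(1, N + 1, size=size)             # n ~ U(1, N)
    params = {"u_shaped": (0.25, 0.25), "right_tailed": (1, 5), "left_tailed": (5, 1)}
    alpha, beta = params[regime]
    p = rng.beta(alpha, beta, size=size)                      # BetaBin(N, alpha, beta)
    return rng.binomial(N, p) + 1                             # shift so that n >= 1 (assumed)
```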
Figure 3 exhibits the generalization graphs for the three formal languages trained with LSTM models under different length distribution regimes. Each single-color sequence in a generalization graph shows the average performance of ten LSTMs trained under the same settings but with different weight initializations. In all these experiments, the training sets had the same length-window $[1, 50]$ . On the other hand, we used 2, 3, and 4 hidden units in our LSTM architectures for the languages $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , respectively. The top three plots show the average lengths of the shortest sequences ( $e_1$ ) whose outputs were incorrectly predicted by the model at test time, whereas the bottom plots show the fifth such shortest lengths ( $e_5$ ). We note that the models trained on uniformly distributed samples seem to perform the best amongst all the four distributions in all the three languages. Furthermore, for the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ , the U-shaped Beta-Binomial distribution appears to help the LSTM models generalize better than the left- and right-tailed Beta Binomial distributions, in which the lengths of the samples are intentionally skewed towards one end of the training length-window.
When we look at the plots for the $e_1$ values, we observe that all the distribution regimes seem to facilitate learning at least up to the longest sequences in their respective training datasets, drawn by the light blue horizontal lines on the plots, except for the left-tailed Beta-Binomial distribution for which we see errors at lengths shorter than the training length threshold in the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ . For instance, if we were to consider only the $e_1$ values in our analysis, it would be tempting to argue that the model trained under the left-tailed Beta-Binomial distribution regime did not learn to recognize the language $a^n b^n c^n d^n$ . By looking at the $e_5$ values, in addition to the $e_1$ values, we however realize that the model was actually learning many of the sequences in the language, but it was just struggling to recognize and correctly predict the outputs of some of the short sequences in the language. This phenomenon can be explained by the under-representation of short sequences in left-tailed Beta-Binomial distributions. Our observation clearly emphasizes the significance of looking beyond $e_1$ , the shortest error length at test time, in order to obtain a more complete picture of the model's generalizing capabilities.
Length Windows
Most of the previous studies trained networks on sequences of lengths $n \in [1, N]$, where typical $N$ values were between 10 and 50 BIBREF11, BIBREF9, and more recently 100 BIBREF23. To determine the impact of the choice of training length-window on the stability and inductive capabilities of the LSTM networks, we experimented with three different length-windows for $n$: $[1, 30]$, $[1, 50]$, and $[50, 100]$. In the third window setting $[50, 100]$, we further wanted to see whether LSTMs are capable of generalizing to short sequences that are contained in the window range $[1, 50]$, as well as to sequences that are longer than the sequences seen in the training set.
Model Capacity
It has been shown by BIBREF9 that LSTMs can learn $a^n b^n$ and $a^n b^n c^n$ with 1 and 2 hidden units, respectively. Similarly, BIBREF24 demonstrated that a simple RNN architecture containing a single hidden unit with carefully tuned parameters can develop a canonical linear counting mechanism to recognize the simple context-free language $a^n b^n$ , for $n \le 250$ . We wanted to explore whether the stability of the networks would improve with an increase in capacity of the LSTM model. We, therefore, varied the number of hidden units in our LSTM models as follows. We experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . The 36 hidden unit case represents an over-parameterized network with more than enough theoretical capacity to recognize all these languages.
Training Length Windows
Figure 4 shows the generalization graphs for the three formal languages trained with LSTM models under different training windows. We note that enlarging the training length-window, naturally, enables an LSTM model to generalize far beyond its training length threshold. Besides, we see that the models with the training length-window of $[50, 100]$ performed slightly better than the other two window ranges in the case of $a^n b^n c^n$ (green line, bottom middle plot). Moreover, we acknowledge the capability of LSTMs to recognize longer sequences, as well as shorter sequences. For instance, when trained on the training length-window $[50, 100]$ , our models learned to recognize not only the longer sequences but also the shorter sequences not presented in the training sets for the languages $a^n b^n$ and $a^n b^n c^n$ .
Finally, we highlight the importance of the $e_5$ values once again: If we were to consider only the $e_1$ values, for instance, we would not have captured the inductive learning capabilities of the models trained with a length-window of $[50, 100]$ in the case of $a^n b^n c^n$ , since the models always failed at recognizing the shortest sequence $ab$ in the language. Yet, considering $e_5$ values helped us evaluate the performance of the LSTM models more accurately.
Number of Hidden Units
There seems to be a positive correlation between the number of hidden units in an LSTM network and its stability while learning a formal language. As Figure 5 demonstrates, increasing the number of hidden units in an LSTM network both increases the network's stability and also leads to faster convergence. However, it does not necessarily result in a better generalization. We conjecture that, with more hidden units, we simply offer more resources to our LSTM models to regulate their hidden states to learn these languages. The next section supports this hypothesis by visualizing the hidden state activations during sequence processing.
Discussion
In addition to the analysis of our empirical results in the previous section, we would like to touch upon two important characteristics of LSTM models when they learn formal languages, namely the convergence issue and counting behavior of LSTM models.
Conclusion
In this paper, we have addressed the influence of various length distribution regimes and length-window sizes on the generalizing ability of LSTMs to learn simple context-free and context-sensitive languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ . Furthermore, we have discussed the effect of the number of hidden units in LSTM models on the stability of a representation learned by the network: We show that increasing the number of hidden units in an LSTM model improves the stability of the network, but not necessarily the inductive power. Finally, we have exhibited the importance of weight initialization to the convergence of the network: Our results indicate that different hidden weight initializations can yield different convergence values, given that all the other parameters are unchanged. Throughout our analysis, we emphasized the importance of a fine-grained evaluation, considering generalization beyond the first error and during training. We therefore concluded that there are an abundant number of parameters that can influence the inductive ability of an LSTM to learn a formal language and that the notion of learning, from a neural network's perspective, should be treated carefully.
Acknowledgment
The first author gratefully acknowledges the support of the Harvard College Research Program (HCRP) and the Harvard Center for Research on Computation and Society Research Fellowship for Undergraduate Students. The second author was supported by the Harvard Mind, Brain, and Behavior Initiative. The authors also thank Sebastian Gehrmann for his helpful comments and discussion at the beginning of the project. The computations in this paper were run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. | Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. , experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . , Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. |
b7708cbb50085eb41e306bd2248f1515a5ebada8 | b7708cbb50085eb41e306bd2248f1515a5ebada8_0 | Q: How do they get the formal languages?
Text: Introduction
Recurrent Neural Networks (RNNs) are powerful machine learning models that can capture and exploit sequential data. They have become standard in important natural language processing tasks such as machine translation BIBREF0 , BIBREF1 and speech recognition BIBREF2 . Despite the ubiquity of various RNN architectures in natural language processing, there still lies an unanswered fundamental question: What classes of languages can, empirically or theoretically, be learned by neural networks? This question has drawn much attention in the study of formal languages, with previous results on both the theoretical BIBREF3 , BIBREF4 and empirical capabilities of RNNs, showing that different RNN architectures can learn certain regular BIBREF5 , BIBREF6 , context-free BIBREF7 , BIBREF8 , and context-sensitive languages BIBREF9 .
In a common experimental setup for investigating whether a neural network can learn a formal language, one formulates a supervised learning problem where the network is presented one character at a time and predicts the next possible character(s). The performance of the network can then be evaluated based on its ability to recognize sequences shown in the training set and – more importantly – to generalize to unseen sequences. There are, however, various methods of evaluation in a language learning task. In order to define the generalization of a network, one may consider the length of the shortest sequence in a language whose output was incorrectly produced by the network, or the size of the largest accepted test set, or the accuracy on a fixed test set BIBREF10 , BIBREF11 , BIBREF9 , BIBREF12 . These formulations follow narrow and bounded evaluation schemes though: They often define a length threshold in the test set and report the performance of the model on this fixed set.
We acknowledge three unsettling issues with these formulations. First, the sequences in the training set are usually assumed to be uniformly or geometrically distributed, with little regard to the nature and complexity of the language. This assumption may undermine any conclusions drawn from empirical investigations, especially given that natural language is not uniformly distributed, an aspect that is known to affect learning in modern RNN architectures BIBREF13 . Second, in a test set where the sequences are enumerated by their lengths, if a network makes an error on a sequence of, say, length 7, but correctly recognizes longer sequences of length up to 1000, would we consider the model's generalization as good or bad? In a setting where we monitor only the shortest sequence that was incorrectly predicted by the network, this scheme clearly misses the potential success of the model after witnessing a failure, thereby misportraying the capabilities of the network. Third, the test sets are often bounded in these formulations, making it challenging to compare and contrast the performance of models if they attain full accuracy on their fixed test sets.
In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.
Our experimental evaluation is focused on the Long Short-Term Memory (LSTM) network BIBREF14 , a particularly popular RNN variant. We consider three formal languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , and investigate how LSTM networks learn these languages under different training regimes. Our investigation leads to the following insights: (1) The data distribution has a significant effect on generalization capability, with discrete uniform and U-shaped distributions often leading to the best generalization amongst all the four distributions in consideration. (2) Widening the training length-window, naturally, enables LSTM models to generalize better to longer sequences, and interestingly, the networks seem to learn to generalize to shorter sequences when trained on long sequences. (3) Higher model capacity – having more hidden units – leads to better stability, but not necessarily better generalization levels. In other words, over-parameterized models are more stable than models with theoretically sufficient but far fewer parameters. We explain this phenomenon by conjecturing that a collaborative counting mechanism arises in over-parameterized networks.
Related Work
It has been shown that RNNs with a finite number of states can process regular languages by acting like a finite-state automaton using different units in their hidden layers BIBREF5 , BIBREF6 . RNNs, however, are not limited to recognizing only regular languages. BIBREF3 and BIBREF4 showed that first-order RNNs (with rational state weights and infinite numeric precision) can simulate a pushdown automaton with two-stacks, thereby demonstrating that RNNs are Turing-complete. In theory, RNNs with infinite numeric precision are capable of expressing recursively enumerable languages. Yet, in practice, modern machine architectures do not contain computational structures that support infinite numeric precision. Thus, the computational power of RNNs with finite precision may not necessarily be the same as that of RNNs with infinite precision.
BIBREF7 investigated the learning capabilities of simple RNNs to process and formalize a context-free grammar containing hierarchical (recursively embedded) dependencies: He observed that distinct parts of the networks were able to learn some complex representations to encode certain grammatical structures and dependencies of the context-free grammar. Later, BIBREF8 introduced an RNN with an external stack memory to learn simple context-free languages, such as $a^n b^m$ , $a^nb^ncb^ma^m$ , and $a^{n+m} b^n c^m$ . Similar studies BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 have explored the existence of stable counting mechanisms in simple RNNs, which would enable them to learn various context-free and context-sensitive languages, but none of the RNN architectures proposed in the early days were able to generalize the training set to longer (or more complex) test samples with substantially high accuracy.
BIBREF9 , on the other hand, proposed a variant of Long Short-Term Memory (LSTM) networks to learn two context-free languages, $a^n b^n$ , $a^n b^m B^m A^n$ , and one strictly context-sensitive language, $a^n b^n c^n$ . Given only a small fraction of samples in a formal language, with values of $n$ (and $m$ ) ranging from 1 to a certain training threshold $N$ , they trained an LSTM model until its full convergence on the training set and then tested it on a more generalized set. They showed that their LSTM model outperformed the previous approaches in capturing and generalizing the aforementioned formal languages. By analyzing the cell states and the activations of the gates in their LSTM model, they further demonstrated that the network learns how to count up and down at certain places in the sample sequences to encode information about the underlying structure of each of these formal languages.
Following this approach, BIBREF19 and BIBREF20 studied the stability of the LSTM networks in learning context-free and context-sensitive languages and examined the processing mechanism developed by the hidden states during the training phase. They observed that the weight initialization of the hidden states in the LSTM network had a significant effect on the inductive capabilities of the model and that the solutions were often unstable in the sense that the numbers up to which the LSTM models were able to generalize using the training dataset sporadically oscillated.
The Sequence Prediction Task
Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. During testing, we use an output threshold criterion of $0.5$ for the sigmoid output layer to indicate which characters were predicted by the model. We then turn this prediction task into a classification task by accepting a sample if our model predicts all of its output values correctly and rejecting it otherwise.
Languages
We consider the following three formal languages in our predictions tasks: $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , where $n \ge 1$ . Of these three languages, the first one is a context-free language and the last two are strictly context-sensitive languages. Table 1 provides example input-output pairs for these languages under the sequence prediction task. In the rest of this section, we formulate the sequence prediction task for each language in more detail.
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n$ consists of $a$ and $b$. The output vocabulary $\mathcal {V}^{(o)}$ is the union of $\mathcal {V}^{(i)}$ and $\lbrace \dashv \rbrace $. Therefore, the input vectors are 2-dimensional, and the output vectors are 3-dimensional. Before the occurrence of the first $b$ in a sequence, the model always predicts $a$ or $b$ (which we notate $a/b$) whenever it sees an $a$. However, after it encounters the first $b$, the rest of the sequence becomes entirely deterministic: Assuming that the model observes $n$ $a$'s in a sequence, it outputs $n-1$ $b$'s for the next $n-1$ $b$'s and the terminal symbol $\dashv $ for the last $b$ in the sequence. Summarizing, we define the input-target scheme for $a^n b^n$ as follows:
$$a^n b^n \Rightarrow (a/b)^n b^{n-1}\dashv $$ (Eq. 8)
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n c^n$ consists of three characters: $a$ , $b$ , and $c$ . The output vocabulary $\mathcal {V}^{(o)}$ is $\mathcal {V}^{(i)} \cup \lbrace \dashv \rbrace $ . The input and output vectors are 3- and 4-dimensional, respectively. The input-target scheme for $a^n b^n c^n$ is:
$$a^n b^n c^n\Rightarrow (a/b)^{n}b^{n-1}c^{n}\dashv $$ (Eq. 10)
The vocabulary $\mathcal {V}^{(i)}$ for the last language $a^n b^n c^n d^n$ consists of $a$ , $b$ , $c$ , and $d$ . The input vectors are 4-dimensional, and the output vectors are 5-dimensional. As in the case of the previous two languages, a sequence becomes entirely deterministic after the observance of the first $b$ , hence the input-target scheme for $a^n b^n c^n d^n$ is:
$$a^n b^n c^n d^n\Rightarrow (a/b)^n b^{n-1} c^n d^{n}\dashv $$ (Eq. 12)
The LSTM Model
We use a single-layer LSTM model to perform the sequence prediction task, followed by a linear layer that maps to the output vocabulary size. The linear layer is followed by a sigmoid unit layer. The loss is the sum of the mean squared error between the prediction and the correct output at each character. See Figure 1 for an illustration. In our implementation, we used the standard LSTM module in PyTorch BIBREF22 and initialized the initial hidden and cell states, $h_0$ and $c_0$ , to zero.
Training and Testing
Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. We remark, for the sake of clarity, that our test design is slightly different from the traditional testing approaches used by BIBREF10 , BIBREF9 , BIBREF12 , since we do not consider the shortest sequence in a language whose output was incorrectly predicted by the model, or the largest accepted test set, or the accuracy of the model on a fixed test set.
Our testing approach, as we will see shortly in the following subsections, gives more information about the inductive capabilities of our LSTM networks than the previous techniques and proves itself to be useful especially in the cases where the distribution of the length of our training dataset is skewed towards one of the boundaries of the distribution's support. For instance, LSTM models sometimes fail to capture some of the short sequences in a language during the testing phase, but they then predict a large number of long sequences correctly. If we were to report only the shortest sequence whose output our model incorrectly predicts, we would then be unable to capture the model's inductive capabilities. Furthermore, we test and report the performance of the model after each full pass of the training set. Finally, in all our investigations, we repeated each experiment ten times. In each trial, we only changed the weights of the hidden states of the model – all the other parameters were kept the same.
Length Distributions
Previous studies have examined various length distribution models to generate appropriate training sets for each formal language: BIBREF16 , BIBREF11 , BIBREF12 , for instance, used length distributions that were skewed towards having more short sequences than long sequences given a training length-window, whereas BIBREF9 used a uniform distribution scheme to generate their training sets. The latter briefly comment that the distribution of lengths of sequences in the training set does influence the generalization ability and convergence speed of neural networks, and mention that training sets containing abundant numbers of both short and long sequences are learned by networks much more quickly than uniformly distributed regimes. Nevertheless, they do not systematically compare or explicitly report their findings. To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. We briefly recall the probability distribution functions for discrete uniform and Beta-Binomial distributions used in our data generation procedure.
Given $N \in \mathbb {N}$ , if a random variable $X \sim U (1, N)$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \frac{1}{N} & \text{if } x \in \lbrace 1, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
To generate training data with uniformly distributed lengths, we simply draw $n$ from $U (1, N)$ as defined above.
Similarly, given $N \in \mathbb {Z}^{\ge 0}$ and two parameters $\alpha $ and $ \beta \in \mathbb {R}^{>0}$ , if a random variable $X \sim \text{BetaBin} (N, \alpha , \beta )$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \binom{N}{x} \frac{B(x+\alpha , N-x+\beta )}{B(\alpha , \beta )} & \text{if } x \in \lbrace 0, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
where $B(\alpha , \beta )$ is the Beta function. We set different values of $\alpha $ and $\beta $ as such in order to generate the following distributions:
U-shaped ( $\alpha = 0.25$ , $\beta = 0.25$ ): The probabilities of having short and long sequences are equally high, but the probability of having an average-length sequence is low.
Right-tailed ( $\alpha = 1$ , $\beta = 5$ ): Short sequences are more probable than long sequences.
Left-tailed ( $\alpha = 5$ , $\beta = 1$ ): Long sequences are more probable than short sequences.
Figure 3 exhibits the generalization graphs for the three formal languages trained with LSTM models under different length distribution regimes. Each single-color sequence in a generalization graph shows the average performance of ten LSTMs trained under the same settings but with different weight initializations. In all these experiments, the training sets had the same length-window $[1, 50]$ . On the other hand, we used 2, 3, and 4 hidden units in our LSTM architectures for the languages $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , respectively. The top three plots show the average lengths of the shortest sequences ( $e_1$ ) whose outputs were incorrectly predicted by the model at test time, whereas the bottom plots show the fifth such shortest lengths ( $e_5$ ). We note that the models trained on uniformly distributed samples seem to perform the best amongst all the four distributions in all the three languages. Furthermore, for the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ , the U-shaped Beta-Binomial distribution appears to help the LSTM models generalize better than the left- and right-tailed Beta Binomial distributions, in which the lengths of the samples are intentionally skewed towards one end of the training length-window.
When we look at the plots for the $e_1$ values, we observe that all the distribution regimes seem to facilitate learning at least up to the longest sequences in their respective training datasets, drawn by the light blue horizontal lines on the plots, except for the left-tailed Beta-Binomial distribution for which we see errors at lengths shorter than the training length threshold in the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ . For instance, if we were to consider only the $e_1$ values in our analysis, it would be tempting to argue that the model trained under the left-tailed Beta-Binomial distribution regime did not learn to recognize the language $a^n b^n c^n d^n$ . By looking at the $e_5$ values, in addition to the $e_1$ values, we however realize that the model was actually learning many of the sequences in the language, but it was just struggling to recognize and correctly predict the outputs of some of the short sequences in the language. This phenomenon can be explained by the under-representation of short sequences in left-tailed Beta-Binomial distributions. Our observation clearly emphasizes the significance of looking beyond $e_1$ , the shortest error length at test time, in order to obtain a more complete picture of the model's generalizing capabilities.
Length Windows
Most of the previous studies trained networks on sequences of lengths $n \in [1, N]$, where typical $N$ values were between 10 and 50 BIBREF11, BIBREF9, and more recently 100 BIBREF23. To determine the impact of the choice of training length-window on the stability and inductive capabilities of the LSTM networks, we experimented with three different length-windows for $n$: $[1, 30]$, $[1, 50]$, and $[50, 100]$. In the third window setting $[50, 100]$, we further wanted to see whether LSTMs are capable of generalizing to short sequences that are contained in the window range $[1, 50]$, as well as to sequences that are longer than the sequences seen in the training set.
Model Capacity
It has been shown by BIBREF9 that LSTMs can learn $a^n b^n$ and $a^n b^n c^n$ with 1 and 2 hidden units, respectively. Similarly, BIBREF24 demonstrated that a simple RNN architecture containing a single hidden unit with carefully tuned parameters can develop a canonical linear counting mechanism to recognize the simple context-free language $a^n b^n$ , for $n \le 250$ . We wanted to explore whether the stability of the networks would improve with an increase in capacity of the LSTM model. We, therefore, varied the number of hidden units in our LSTM models as follows. We experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . The 36 hidden unit case represents an over-parameterized network with more than enough theoretical capacity to recognize all these languages.
Training Length Windows
Figure 4 shows the generalization graphs for the three formal languages trained with LSTM models under different training windows. We note that enlarging the training length-window, naturally, enables an LSTM model to generalize far beyond its training length threshold. Besides, we see that the models with the training length-window of $[50, 100]$ performed slightly better than the other two window ranges in the case of $a^n b^n c^n$ (green line, bottom middle plot). Moreover, we acknowledge the capability of LSTMs to recognize longer sequences, as well as shorter sequences. For instance, when trained on the training length-window $[50, 100]$ , our models learned to recognize not only the longer sequences but also the shorter sequences not presented in the training sets for the languages $a^n b^n$ and $a^n b^n c^n$ .
Finally, we highlight the importance of the $e_5$ values once again: If we were to consider only the $e_1$ values, for instance, we would not have captured the inductive learning capabilities of the models trained with a length-window of $[50, 100]$ in the case of $a^n b^n c^n$ , since the models always failed at recognizing the shortest sequence $ab$ in the language. Yet, considering $e_5$ values helped us evaluate the performance of the LSTM models more accurately.
Number of Hidden Units
There seems to be a positive correlation between the number of hidden units in an LSTM network and its stability while learning a formal language. As Figure 5 demonstrates, increasing the number of hidden units in an LSTM network both increases the network's stability and also leads to faster convergence. However, it does not necessarily result in a better generalization. We conjecture that, with more hidden units, we simply offer more resources to our LSTM models to regulate their hidden states to learn these languages. The next section supports this hypothesis by visualizing the hidden state activations during sequence processing.
Discussion
In addition to the analysis of our empirical results in the previous section, we would like to touch upon two important characteristics of LSTM models when they learn formal languages, namely the convergence issue and counting behavior of LSTM models.
Conclusion
In this paper, we have addressed the influence of various length distribution regimes and length-window sizes on the generalizing ability of LSTMs to learn simple context-free and context-sensitive languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ . Furthermore, we have discussed the effect of the number of hidden units in LSTM models on the stability of a representation learned by the network: We show that increasing the number of hidden units in an LSTM model improves the stability of the network, but not necessarily the inductive power. Finally, we have exhibited the importance of weight initialization to the convergence of the network: Our results indicate that different hidden weight initializations can yield different convergence values, given that all the other parameters are unchanged. Throughout our analysis, we emphasized the importance of a fine-grained evaluation, considering generalization beyond the first error and during training. We therefore concluded that there are an abundant number of parameters that can influence the inductive ability of an LSTM to learn a formal language and that the notion of learning, from a neural network's perspective, should be treated carefully.
Acknowledgment
The first author gratefully acknowledges the support of the Harvard College Research Program (HCRP) and the Harvard Center for Research on Computation and Society Research Fellowship for Undergraduate Students. The second author was supported by the Harvard Mind, Brain, and Behavior Initiative. The authors also thank Sebastian Gehrmann for his helpful comments and discussion at the beginning of the project. The computations in this paper were run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. | These are well-known formal languages, some of which were used in the literature to evaluate the learning capabilities of RNNs. |
17988d65e46ff7d756076e9191890aec177b081e | 17988d65e46ff7d756076e9191890aec177b081e_0 | Q: Are the unobserved samples from the same distribution as the training data?
Text: Introduction
Recurrent Neural Networks (RNNs) are powerful machine learning models that can capture and exploit sequential data. They have become standard in important natural language processing tasks such as machine translation BIBREF0 , BIBREF1 and speech recognition BIBREF2 . Despite the ubiquity of various RNN architectures in natural language processing, there still lies an unanswered fundamental question: What classes of languages can, empirically or theoretically, be learned by neural networks? This question has drawn much attention in the study of formal languages, with previous results on both the theoretical BIBREF3 , BIBREF4 and empirical capabilities of RNNs, showing that different RNN architectures can learn certain regular BIBREF5 , BIBREF6 , context-free BIBREF7 , BIBREF8 , and context-sensitive languages BIBREF9 .
In a common experimental setup for investigating whether a neural network can learn a formal language, one formulates a supervised learning problem where the network is presented one character at a time and predicts the next possible character(s). The performance of the network can then be evaluated based on its ability to recognize sequences shown in the training set and – more importantly – to generalize to unseen sequences. There are, however, various methods of evaluation in a language learning task. In order to define the generalization of a network, one may consider the length of the shortest sequence in a language whose output was incorrectly produced by the network, or the size of the largest accepted test set, or the accuracy on a fixed test set BIBREF10 , BIBREF11 , BIBREF9 , BIBREF12 . These formulations follow narrow and bounded evaluation schemes though: They often define a length threshold in the test set and report the performance of the model on this fixed set.
We acknowledge three unsettling issues with these formulations. First, the sequences in the training set are usually assumed to be uniformly or geometrically distributed, with little regard to the nature and complexity of the language. This assumption may undermine any conclusions drawn from empirical investigations, especially given that natural language is not uniformly distributed, an aspect that is known to affect learning in modern RNN architectures BIBREF13 . Second, in a test set where the sequences are enumerated by their lengths, if a network makes an error on a sequence of, say, length 7, but correctly recognizes longer sequences of length up to 1000, would we consider the model's generalization as good or bad? In a setting where we monitor only the shortest sequence that was incorrectly predicted by the network, this scheme clearly misses the potential success of the model after witnessing a failure, thereby misportraying the capabilities of the network. Third, the test sets are often bounded in these formulations, making it challenging to compare and contrast the performance of models if they attain full accuracy on their fixed test sets.
In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.
Our experimental evaluation is focused on the Long Short-Term Memory (LSTM) network BIBREF14 , a particularly popular RNN variant. We consider three formal languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , and investigate how LSTM networks learn these languages under different training regimes. Our investigation leads to the following insights: (1) The data distribution has a significant effect on generalization capability, with discrete uniform and U-shaped distributions often leading to the best generalization amongst all the four distributions in consideration. (2) Widening the training length-window, naturally, enables LSTM models to generalize better to longer sequences, and interestingly, the networks seem to learn to generalize to shorter sequences when trained on long sequences. (3) Higher model capacity – having more hidden units – leads to better stability, but not necessarily better generalization levels. In other words, over-parameterized models are more stable than models with theoretically sufficient but far fewer parameters. We explain this phenomenon by conjecturing that a collaborative counting mechanism arises in over-parameterized networks.
Related Work
It has been shown that RNNs with a finite number of states can process regular languages by acting like a finite-state automaton using different units in their hidden layers BIBREF5 , BIBREF6 . RNNs, however, are not limited to recognizing only regular languages. BIBREF3 and BIBREF4 showed that first-order RNNs (with rational state weights and infinite numeric precision) can simulate a pushdown automaton with two-stacks, thereby demonstrating that RNNs are Turing-complete. In theory, RNNs with infinite numeric precision are capable of expressing recursively enumerable languages. Yet, in practice, modern machine architectures do not contain computational structures that support infinite numeric precision. Thus, the computational power of RNNs with finite precision may not necessarily be the same as that of RNNs with infinite precision.
BIBREF7 investigated the learning capabilities of simple RNNs to process and formalize a context-free grammar containing hierarchical (recursively embedded) dependencies: He observed that distinct parts of the networks were able to learn some complex representations to encode certain grammatical structures and dependencies of the context-free grammar. Later, BIBREF8 introduced an RNN with an external stack memory to learn simple context-free languages, such as $a^n b^m$ , $a^nb^ncb^ma^m$ , and $a^{n+m} b^n c^m$ . Similar studies BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 have explored the existence of stable counting mechanisms in simple RNNs, which would enable them to learn various context-free and context-sensitive languages, but none of the RNN architectures proposed in the early days were able to generalize the training set to longer (or more complex) test samples with substantially high accuracy.
BIBREF9 , on the other hand, proposed a variant of Long Short-Term Memory (LSTM) networks to learn two context-free languages, $a^n b^n$ , $a^n b^m B^m A^n$ , and one strictly context-sensitive language, $a^n b^n c^n$ . Given only a small fraction of samples in a formal language, with values of $n$ (and $m$ ) ranging from 1 to a certain training threshold $N$ , they trained an LSTM model until its full convergence on the training set and then tested it on a more generalized set. They showed that their LSTM model outperformed the previous approaches in capturing and generalizing the aforementioned formal languages. By analyzing the cell states and the activations of the gates in their LSTM model, they further demonstrated that the network learns how to count up and down at certain places in the sample sequences to encode information about the underlying structure of each of these formal languages.
Following this approach, BIBREF19 and BIBREF20 studied the stability of the LSTM networks in learning context-free and context-sensitive languages and examined the processing mechanism developed by the hidden states during the training phase. They observed that the weight initialization of the hidden states in the LSTM network had a significant effect on the inductive capabilities of the model and that the solutions were often unstable in the sense that the numbers up to which the LSTM models were able to generalize using the training dataset sporadically oscillated.
The Sequence Prediction Task
Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions. During testing, we use an output threshold criterion of $0.5$ for the sigmoid output layer to indicate which characters were predicted by the model. We then turn this prediction task into a classification task by accepting a sample if our model predicts all of its output values correctly and rejecting it otherwise.
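A minimal sketch of this prediction-to-classification conversion is given below, assuming PyTorch tensors; the tensor shapes and the helper name are our own illustrative choices, while the 0.5 threshold and the all-outputs-correct acceptance rule follow the description above.

```python
import torch

def accepts(sigmoid_outputs: torch.Tensor, targets: torch.Tensor, threshold: float = 0.5) -> bool:
    """Accept a sample only if every output value of every character is predicted correctly.

    sigmoid_outputs: (seq_len, d + 1) tensor of sigmoid activations produced by the model.
    targets:         (seq_len, d + 1) binary tensor holding the k-hot target encoding.
    """
    predicted = (sigmoid_outputs > threshold).long()    # apply the 0.5 output threshold criterion
    return bool(torch.equal(predicted, targets.long()))
```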
Languages
We consider the following three formal languages in our prediction tasks: $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , where $n \ge 1$ . Of these three languages, the first one is a context-free language and the last two are strictly context-sensitive languages. Table 1 provides example input-output pairs for these languages under the sequence prediction task. In the rest of this section, we formulate the sequence prediction task for each language in more detail.
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n$ consists of $a$ and $b$ . The output vocabulary $\mathcal {V}^{(o)}$ is the union of $\mathcal {V}^{(i)}$ and $\lbrace \dashv \rbrace $ . Therefore, the input vectors are 2-dimensional, and the output vectors are 3-dimensional. Before the occurrence of the first $b$ in a sequence, the model always predicts $a$ or $b$ (which we notate $a/b$) whenever it sees an $a$. However, after it encounters the first $b$, the rest of the sequence becomes entirely deterministic: Assuming that the model observes $n$ $a$'s in a sequence, it outputs $n-1$ $b$'s for the next $n-1$ $b$'s and the terminal symbol $\dashv$ for the last $b$ in the sequence. Summarizing, we define the input-target scheme for $a^n b^n$ as follows:
$$a^n b^n \Rightarrow (a/b)^n b^{n-1}\dashv $$ (Eq. 8)
The input vocabulary $\mathcal {V}^{(i)}$ for $a^n b^n c^n$ consists of three characters: $a$ , $b$ , and $c$ . The output vocabulary $\mathcal {V}^{(o)}$ is $\mathcal {V}^{(i)} \cup \lbrace \dashv \rbrace $ . The input and output vectors are 3- and 4-dimensional, respectively. The input-target scheme for $a^n b^n c^n$ is:
$$a^n b^n c^n\Rightarrow (a/b)^{n}b^{n-1}c^{n}\dashv $$ (Eq. 10)
The vocabulary $\mathcal {V}^{(i)}$ for the last language $a^n b^n c^n d^n$ consists of $a$ , $b$ , $c$ , and $d$ . The input vectors are 4-dimensional, and the output vectors are 5-dimensional. As in the case of the previous two languages, a sequence becomes entirely deterministic after the observance of the first $b$ , hence the input-target scheme for $a^n b^n c^n d^n$ is:
$$a^n b^n c^n d^n\Rightarrow (a/b)^n b^{n-1} c^n d^{n}\dashv $$ (Eq. 12)
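These schemes can be generated mechanically. The sketch below produces character-level input/target pairs for $a^n b^n$ following Eq. (8); the other two languages follow the same pattern with additional deterministic blocks for $c$ and $d$. The function name and the set-based target representation are our own choices, not the authors' code.

```python
def anbn_example(n: int):
    """Input characters and target sets for a^n b^n under the scheme (a/b)^n b^(n-1) ⊣."""
    inputs = ['a'] * n + ['b'] * n
    targets = [{'a', 'b'}] * n          # while reading a's, the next character may be a or b
    targets += [{'b'}] * (n - 1)        # after the first b, the continuation is deterministic
    targets += [{'⊣'}]                  # the final b predicts the termination symbol
    return inputs, targets

# anbn_example(2) -> (['a', 'a', 'b', 'b'], [{'a', 'b'}, {'a', 'b'}, {'b'}, {'⊣'}])
```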
The LSTM Model
We use a single-layer LSTM model to perform the sequence prediction task, followed by a linear layer that maps to the output vocabulary size. The linear layer is followed by a sigmoid unit layer. The loss is the sum of the mean squared error between the prediction and the correct output at each character. See Figure 1 for an illustration. In our implementation, we used the standard LSTM module in PyTorch BIBREF22 and initialized the initial hidden and cell states, $h_0$ and $c_0$ , to zero.
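A minimal PyTorch sketch of this architecture, assuming the setup described above (single LSTM layer, linear output layer, sigmoid activation, summed squared-error loss, zero-initialized states); the class name and the exact loss reduction are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SequencePredictor(nn.Module):
    """Single-layer LSTM followed by a linear map to the output vocabulary and a sigmoid."""

    def __init__(self, input_size: int, hidden_size: int, output_size: int):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len, input_size) one-hot inputs; state=None gives zero h_0 and c_0
        h, state = self.lstm(x, state)
        return torch.sigmoid(self.out(h)), state

# Example instantiation for a^n b^n: 2 input symbols, 3 output symbols, 2 hidden units.
model = SequencePredictor(input_size=2, hidden_size=2, output_size=3)
loss_fn = nn.MSELoss(reduction='sum')   # stand-in for summing the per-character squared errors
```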
Training and Testing
Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. We remark, for the sake of clarity, that our test design is slightly different from the traditional testing approaches used by BIBREF10 , BIBREF9 , BIBREF12 , since we do not consider the shortest sequence in a language whose output was incorrectly predicted by the model, or the largest accepted test set, or the accuracy of the model on a fixed test set.
Our testing approach, as we will see shortly in the following subsections, gives more information about the inductive capabilities of our LSTM networks than the previous techniques and proves itself to be useful especially in the cases where the distribution of the length of our training dataset is skewed towards one of the boundaries of the distribution's support. For instance, LSTM models sometimes fail to capture some of the short sequences in a language during the testing phase, but they then predict a large number of long sequences correctly. If we were to report only the shortest sequence whose output our model incorrectly predicts, we would then be unable to capture the model's inductive capabilities. Furthermore, we test and report the performance of the model after each full pass of the training set. Finally, in all our investigations, we repeated each experiment ten times. In each trial, we only changed the weights of the hidden states of the model – all the other parameters were kept the same.
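The testing step can be sketched as follows: with the weights frozen, enumerate the sequences of the language by length and record the lengths of the first $k$ sequences whose outputs the model gets wrong (yielding $e_1, \dots, e_k$). The helper signatures below are assumptions; only the protocol itself is taken from the text.

```python
def first_k_error_lengths(model_accepts, make_example, k: int = 5, max_n: int = 1000):
    """Lengths n of the first k sequences the frozen model predicts incorrectly.

    model_accepts(inputs, targets) -> bool : accept/reject criterion (e.g. the one sketched earlier)
    make_example(n) -> (inputs, targets)   : e.g. anbn_example
    """
    errors = []
    for n in range(1, max_n + 1):
        inputs, targets = make_example(n)
        if not model_accepts(inputs, targets):
            errors.append(n)
            if len(errors) == k:
                break
    return errors   # errors[0] is e_1, errors[k-1] is e_k (if k errors were found)
```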
Length Distributions
Previous studies have examined various length distribution models to generate appropriate training sets for each formal language: BIBREF16 , BIBREF11 , BIBREF12 , for instance, used length distributions that were skewed towards having more short sequences than long sequences given a training length-window, whereas BIBREF9 used a uniform distribution scheme to generate their training sets. The latter briefly comment that the distribution of lengths of sequences in the training set does influence the generalization ability and convergence speed of neural networks, and mention that training sets containing abundant numbers of both short and long sequences are learned by networks much more quickly than uniformly distributed regimes. Nevertheless, they do not systematically compare or explicitly report their findings. To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. We briefly recall the probability distribution functions for discrete uniform and Beta-Binomial distributions used in our data generation procedure.
Given $N \in \mathbb {N}$ , if a random variable $X \sim U (1, N)$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \frac{1}{N} & \text{if } x \in \lbrace 1, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
To generate training data with uniformly distributed lengths, we simply draw $n$ from $U (1, N)$ as defined above.
Similarly, given $N \in \mathbb {Z}^{\ge 0}$ and two parameters $\alpha $ and $ \beta \in \mathbb {R}^{>0}$ , if a random variable $X \sim \text{BetaBin} (N, \alpha , \beta )$ , then the probability distribution function of $X$ is given as follows: $ P(x) = {\left\lbrace \begin{array}{ll} \binom{N}{x} \frac{B(x+\alpha , N-x+\beta )}{B(\alpha , \beta )} & \text{if } x \in \lbrace 0, \ldots , N\rbrace \\ 0 & \text{otherwise.} \end{array}\right.} $
where $B(\alpha , \beta )$ is the Beta function. We set different values of $\alpha $ and $\beta $ as such in order to generate the following distributions:
U-shaped ( $\alpha = 0.25$ , $\beta = 0.25$ ): The probabilities of having short and long sequences are equally high, but the probability of having an average-length sequence is low.
Right-tailed ( $\alpha = 1$ , $\beta = 5$ ): Short sequences are more probable than long sequences.
Left-tailed ( $\alpha = 5$ , $\beta = 1$ ): Long sequences are more probable than short sequences.
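A sketch of the four length-sampling regimes, assuming NumPy; the Beta-Binomial draw is realized by sampling $p \sim \text{Beta}(\alpha, \beta)$ followed by $n \sim \text{Binomial}(N, p)$, and clipping a draw of 0 to 1 is our own simplifying assumption to keep $n \ge 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
BETA_PARAMS = {"u_shaped": (0.25, 0.25), "right_tailed": (1.0, 5.0), "left_tailed": (5.0, 1.0)}

def sample_length(regime: str, N: int = 50) -> int:
    """Draw one training-sequence length n under one of the four distribution regimes."""
    if regime == "uniform":
        return int(rng.integers(1, N + 1))     # discrete uniform on {1, ..., N}
    alpha, beta = BETA_PARAMS[regime]
    p = rng.beta(alpha, beta)                  # Beta-Binomial sampled as a Beta-mixed Binomial
    return max(int(rng.binomial(N, p)), 1)     # clip zero-length draws (assumption)
```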
Figure 3 exhibits the generalization graphs for the three formal languages trained with LSTM models under different length distribution regimes. Each single-color sequence in a generalization graph shows the average performance of ten LSTMs trained under the same settings but with different weight initializations. In all these experiments, the training sets had the same length-window $[1, 50]$ . On the other hand, we used 2, 3, and 4 hidden units in our LSTM architectures for the languages $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , respectively. The top three plots show the average lengths of the shortest sequences ( $e_1$ ) whose outputs were incorrectly predicted by the model at test time, whereas the bottom plots show the fifth such shortest lengths ( $e_5$ ). We note that the models trained on uniformly distributed samples seem to perform the best amongst all the four distributions in all the three languages. Furthermore, for the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ , the U-shaped Beta-Binomial distribution appears to help the LSTM models generalize better than the left- and right-tailed Beta Binomial distributions, in which the lengths of the samples are intentionally skewed towards one end of the training length-window.
When we look at the plots for the $e_1$ values, we observe that all the distribution regimes seem to facilitate learning at least up to the longest sequences in their respective training datasets, drawn by the light blue horizontal lines on the plots, except for the left-tailed Beta-Binomial distribution for which we see errors at lengths shorter than the training length threshold in the languages $a^n b^n c^n$ and $a^n b^n c^n d^n$ . For instance, if we were to consider only the $e_1$ values in our analysis, it would be tempting to argue that the model trained under the left-tailed Beta-Binomial distribution regime did not learn to recognize the language $a^n b^n c^n d^n$ . By looking at the $e_5$ values, in addition to the $e_1$ values, we however realize that the model was actually learning many of the sequences in the language, but it was just struggling to recognize and correctly predict the outputs of some of the short sequences in the language. This phenomenon can be explained by the under-representation of short sequences in left-tailed Beta-Binomial distributions. Our observation clearly emphasizes the significance of looking beyond $e_1$ , the shortest error length at test time, in order to obtain a more complete picture of the model's generalizing capabilities.
Length Windows
Most of the previous studies trained networks on sequences of lengths $n \in [1, N]$ , where typical $N$ values were between 10 and 50 BIBREF11 , BIBREF9 , and more recently 100 BIBREF23 . To determine the impact of the choice of training length-window on the stability and inductive capabilities of the LSTM networks, we experimented with three different length-windows for $n$ : $[1, 30]$ , $[1, 50]$ , and $[50, 100]$ . In the third window setting $[50, 100]$ , we further wanted to see whether LSTMs are capable of generalizing to short sequences that are contained in the window range $[1, 50]$ , as well as to sequences that are longer than the sequences seen in the training set.
Model Capacity
It has been shown by BIBREF9 that LSTMs can learn $a^n b^n$ and $a^n b^n c^n$ with 1 and 2 hidden units, respectively. Similarly, BIBREF24 demonstrated that a simple RNN architecture containing a single hidden unit with carefully tuned parameters can develop a canonical linear counting mechanism to recognize the simple context-free language $a^n b^n$ , for $n \le 250$ . We wanted to explore whether the stability of the networks would improve with an increase in capacity of the LSTM model. We, therefore, varied the number of hidden units in our LSTM models as follows. We experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . The 36 hidden unit case represents an over-parameterized network with more than enough theoretical capacity to recognize all these languages.
Training Length Windows
Figure 4 shows the generalization graphs for the three formal languages trained with LSTM models under different training windows. We note that enlarging the training length-window, naturally, enables an LSTM model to generalize far beyond its training length threshold. Besides, we see that the models with the training length-window of $[50, 100]$ performed slightly better than the other two window ranges in the case of $a^n b^n c^n$ (green line, bottom middle plot). Moreover, we acknowledge the capability of LSTMs to recognize longer sequences, as well as shorter sequences. For instance, when trained on the training length-window $[50, 100]$ , our models learned to recognize not only the longer sequences but also the shorter sequences not presented in the training sets for the languages $a^n b^n$ and $a^n b^n c^n$ .
Finally, we highlight the importance of the $e_5$ values once again: If we were to consider only the $e_1$ values, for instance, we would not have captured the inductive learning capabilities of the models trained with a length-window of $[50, 100]$ in the case of $a^n b^n c^n$ , since the models always failed at recognizing the shortest sequence $ab$ in the language. Yet, considering $e_5$ values helped us evaluate the performance of the LSTM models more accurately.
Number of Hidden Units
There seems to be a positive correlation between the number of hidden units in an LSTM network and its stability while learning a formal language. As Figure 5 demonstrates, increasing the number of hidden units in an LSTM network both increases the network's stability and also leads to faster convergence. However, it does not necessarily result in a better generalization. We conjecture that, with more hidden units, we simply offer more resources to our LSTM models to regulate their hidden states to learn these languages. The next section supports this hypothesis by visualizing the hidden state activations during sequence processing.
Discussion
In addition to the analysis of our empirical results in the previous section, we would like to touch upon two important characteristics of LSTM models when they learn formal languages, namely the convergence issue and counting behavior of LSTM models.
Conclusion
In this paper, we have addressed the influence of various length distribution regimes and length-window sizes on the generalizing ability of LSTMs to learn simple context-free and context-sensitive languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ . Furthermore, we have discussed the effect of the number of hidden units in LSTM models on the stability of a representation learned by the network: We show that increasing the number of hidden units in an LSTM model improves the stability of the network, but not necessarily the inductive power. Finally, we have exhibited the importance of weight initialization to the convergence of the network: Our results indicate that different hidden weight initializations can yield different convergence values, given that all the other parameters are unchanged. Throughout our analysis, we emphasized the importance of a fine-grained evaluation, considering generalization beyond the first error and during training. We therefore concluded that there are an abundant number of parameters that can influence the inductive ability of an LSTM to learn a formal language and that the notion of learning, from a neural network's perspective, should be treated carefully.
Acknowledgment
The first author gratefully acknowledges the support of the Harvard College Research Program (HCRP) and the Harvard Center for Research on Computation and Society Research Fellowship for Undergraduate Students. The second author was supported by the Harvard Mind, Brain, and Behavior Initiative. The authors also thank Sebastian Gehrmann for his helpful comments and discussion at the beginning of the project. The computations in this paper were run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. | No |
11c77ee117cb4de825016b6ccff59ff021f84a38 | 11c77ee117cb4de825016b6ccff59ff021f84a38_0 | Q: By how much does their model outperform the baseline in the cross-domain evaluation?
Text: Introduction
Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity.
In contrast to SA, the more fine-grained task of Aspect Based Sentiment Analysis (ABSA) BIBREF0, BIBREF1 aims at finding both the aspect of an entity like a restaurant and the sentiment associated with this aspect.
It is important to note that ABSA comes in two variants. We will use the sentence “I love their dumplings” to explain these variants in detail.
Both variants are implemented as a two-step procedure. The first variant is comprised of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated to the aspect is classified. For our example-sentence the correct result is (“food”, “positive”).
The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like “dumplings” are detected. In the second step, ATSC, the sentiment polarity associated to the aspect-target is determined. In our example the correct result is the tuple ("dumplings", "positive").
In this work, we focus on ATSC. In recent years, specialized neural architectures BIBREF2, BIBREF3 have been developed that substantially improved modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models BIBREF4, BIBREF5, BIBREF6, BIBREF7 as a base for many down-stream tasks, including ABSA BIBREF8, BIBREF9, BIBREF10. We still see huge potential in this trend, which is why we approach the ATSC task using the BERT architecture.
As shown by BIBREF9, for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the model on domain-specific corpora — in their case review corpora — that have not been used for pre-training BERT, or other language models.
We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are:
The analysis of the influence of the number of training steps used for BERT language model finetuning on the Aspect-Target Sentiment Classification performance.
The findings on how to exploit BERT language model finetuning enable us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset.
The analysis of cross-domain adaptation between the laptops and restaurants domains. Adaptation is tested by finetuning the BERT language model in a self-supervised way on the target domain and then training it in a supervised way on the ATSC task in the source domain. In addition, the performance of training on the combination of both datasets is measured.
Related Works
We separate our discussion of related work into two areas: first, neural methods applied to ATSC that have improved performance solely through model architecture improvements; second, methods that additionally aim to transfer knowledge from semantically related tasks or domains.
Related Works ::: Architecture Improvements for Aspect-Target Sentiment Classification
The datasets typically used for Aspect-Target Sentiment Classification are the SemEval 2014 Task 4 datasets BIBREF1 for the restaurants and laptops domain. Unfortunately, both datasets only have a small number of training examples. One common approach to compensate for insufficient training examples is to invent neural architectures that better model ATSC. For example, in the past a big leap in classification performance was achieved with the use of the Memory Network architecture BIBREF3, which uses memory to remember context words and explicitly models attention over both the target word and context. It was found that making full use of context words improves their model compared to previous models BIBREF2 that make use of left- and right-sided context independently.
BIBREF8 proposed Attention Encoder Networks (AEN), a modification to the transformer architecture. The authors split the Multi-Head Attention (MHA) layers into Intra-MHA and Inter-MHA layers in order to model target words and context differently, which results in a more lightweight model compared to the transformer architecture.
Another recent performance leap was achieved by BIBREF11, who model dependencies between sentiment words explicitly in sentences with more than one aspect-target by using a graph convolutional neural network. They show that their architecture performs particularly well if multiple aspects are present in a sentence.
Related Works ::: Knowledge Transfer for Aspect-Target Sentiment Classification Analysis
Another approach to compensate for insufficient training examples is to transfer knowledge across domains or across similar tasks.
BIBREF12 proposed Multi-Granularity Alignment Networks (MGAN). They use this architecture to transfer knowledge from both an aspect-category classification task and also across different domains. They built a large scale aspect-category dataset specifically for this.
BIBREF13 transfer knowledge from a document-level sentiment classification task trained on the amazon review dataset BIBREF14. They successfully apply pre-training by reusing the weights of a Long Short Term Memory (LSTM) network BIBREF15 that has been trained on the document-level sentiment task. In addition, they apply multi-task learning where aspect and document-level tasks are learned simultaneously by minimizing a joint loss function.
Similarly, BIBREF9 introduce a multi-task loss function to simultaneously optimize the BERT model's BIBREF7 pre-training objectives as well as a question answering task.
In contrast to the methods described above that aim to transfer knowledge from a different source task like question answering or document-level sentiment classification, this paper aims at transferring knowledge across different domains by finetuning the BERT language model.
Methodology
We approach the Aspect-Target Sentiment Classification task using a two-step procedure. We use the pre-trained BERT architecture as a basis. In the first step we finetune the pre-trained weights of the language model further in a self-supervised way on a domain-specific corpus. In the second step we train the finetuned language model in a supervised way on the ATSC end-task.
In the following subsections, we discuss the BERT architecture, how we finetune the language model, and how we transform the ATSC task into a BERT sequence-pair classification task BIBREF10. Finally, we discuss the different end-task training and domain-specific finetuning combinations we employ to evaluate our model's generalization performance not only in-domain but also cross-domain.
Methodology ::: BERT
The BERT model builds on many previous innovations: contextualized word representations BIBREF4, the transformer architecture BIBREF16, and pre-training on a language modeling task with subsequent end-to-end finetuning on a downstream task BIBREF5, BIBREF6. Due to being deeply bidirectional, the BERT architecture creates very powerful sequence representations that perform extremely well on many downstream tasks BIBREF7.
The main innovation of BERT is that instead of using the objective of next-word prediction a different objective is used to train the language model. This objective consists of 2 parts.
The first part is the masked language model objective, where the model learns to predict tokens, which have been randomly masked, from the context.
The second part is the next-sequence prediction objective, where the model needs to predict if a sequence $B$ would naturally follow the previous sequence $A$. This objective enables the model to capture long-term dependencies better. Both objectives are discussed in more detail in the next section.
As a base for our experiments we use the BERTBASE model, which has been pre-trained by the Google research team. It has the following parameters: 12 layers, 768 hidden dimensions per token and 12 attention heads. It has 110 Mio. parameters in total.
For finetuning the BERT language model on a specific domain we use the weights of BERTBASE as a starting point.
Methodology ::: BERT Language Model Finetuning
As the first step of our procedure we perform language model finetuning of the BERT model using domain-specific corpora. Algorithmically, this is equivalent to pre-training. The domain-specific language model finetuning as an intermediate step to ATSC has been shown by BIBREF9. As an extension to their paper we investigate the limits of language model finetuning in terms of how end-task performance is dependent on the amount of training steps.
The training input representation for language model finetuning consists of two sequences $s_A$ and $s_B$ in the format of $"\textrm {[CLS]} \ s_{A} \ \textrm {[SEP]} \ s_{B} \ \textrm {[SEP]}"$, where [CLS] is a dummy token used for downstream classification and [SEP] are separator tokens.
Methodology ::: BERT Language Model Finetuning ::: Masked Language Model Objective
The sequences $A$ and $B$ have tokens randomly masked out in order for the model to learn to predict them. The following example shows why domain-specific finetuning can alleviate the bias from pre-training on a Wikipedia corpus: "The touchscreen is an [MASK] device". In the fact-based context of Wikipedia the [MASK] could be "input" and in the review domain a typical guess could be the general opinion word "amazing".
Methodology ::: BERT Language Model Finetuning ::: Next-Sentence Prediction
In order to train BERT to capture long-term dependencies better, the model is trained to predict if sequence $B$ follows sequence $A$. If this is the case, sequence A and sequence B are jointly sampled from the same document in the order they are occuring naturally. Otherwise the sequences are sampled randomly from the training corpus.
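A simplified sketch of how one finetuning example could be assembled from a review corpus: pick two sentences (consecutive for a positive next-sentence label, a random corpus sentence otherwise), join them in the [CLS]/[SEP] format, and mask a fraction of tokens. The whitespace tokenization, the flat 15% masking rate, and the helper names are simplifying assumptions; BERT's actual WordPiece tokenization and 80/10/10 masking scheme are more involved.

```python
import random

def make_lm_example(review_sentences, corpus_sentences, mask_prob: float = 0.15):
    """Build one (tokens, masked_positions, is_next) example for BERT language model finetuning.

    review_sentences: sentences of one review (assumed to contain at least two sentences)
    corpus_sentences: pool of sentences from the whole finetuning corpus (for negative pairs)
    """
    i = random.randrange(len(review_sentences) - 1)
    seq_a = review_sentences[i]
    if random.random() < 0.5:                      # positive pair: B really follows A
        seq_b, is_next = review_sentences[i + 1], True
    else:                                          # negative pair: B sampled at random
        seq_b, is_next = random.choice(corpus_sentences), False

    tokens = ["[CLS]"] + seq_a.split() + ["[SEP]"] + seq_b.split() + ["[SEP]"]
    masked_positions = []
    for pos, tok in enumerate(tokens):
        if tok not in ("[CLS]", "[SEP]") and random.random() < mask_prob:
            tokens[pos] = "[MASK]"                 # masked language model objective
            masked_positions.append(pos)
    return tokens, masked_positions, is_next       # is_next feeds the next-sentence objective
```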
Methodology ::: Aspect-Target Sentiment Classification
The ATSC task aims at classifying sentiment polarity into the three classes positive, negative, neutral with respect to an aspect-target. The input to the classifier is a tokenized sentence $s=s_{1:n}$ and a target $t=s_{j:j+m}$ contained in the sentence, where $j < j+m \le n$. Similar to previous work by BIBREF10, we transform the input into a format compatible with BERT sequence-pair classification tasks: $"\textrm {[CLS]} \ s \ \textrm {[SEP]} \ t \ \textrm {[SEP]}"$.
In the BERT architecture the position of the token embeddings is structurally maintained after each Multi-Head Attention layer. Therefore, we refer to the last hidden representation of the [CLS] token as $h_{[CLS]} \in \mathbf {R}^{768 \times 1}$. The number of sentiment polarity classes is three. A distribution $p \in [0,1]^3$ over these classes is predicted using a fully-connected layer with 3 output neurons on top of $h_{[CLS]}$, followed by a softmax activation function
where $b \in \mathbf {R}^3$ and $W \in \mathbf {R}^{3 \times 768}$. Cross-entropy is used as the training loss. The way we use BERT for classifying the sentiment polaritites is equivalent to how BERT is used for sequence-pair classification tasks in the original paper BIBREF7.
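A sketch of this setup in PyTorch: the sequence-pair input format and a 3-way linear head over the final [CLS] representation, trained with cross-entropy. How $h_{[CLS]}$ is obtained from the BERT encoder is left out, and the class and function names are illustrative.

```python
import torch
import torch.nn as nn

def build_atsc_input(sentence: str, target: str) -> str:
    # "[CLS] s [SEP] t [SEP]" -- in practice the BERT tokenizer inserts the special tokens itself
    return f"[CLS] {sentence} [SEP] {target} [SEP]"

class AtscHead(nn.Module):
    """Linear layer W in R^{3x768}, b in R^3 on top of h_[CLS], as in the equation above."""

    def __init__(self, hidden_dim: int = 768, num_classes: int = 3):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, num_classes)

    def forward(self, h_cls: torch.Tensor) -> torch.Tensor:
        return self.linear(h_cls)                   # logits; the softmax is applied below

loss_fn = nn.CrossEntropyLoss()                     # combines softmax and cross-entropy for training

def predict_polarity(head: AtscHead, h_cls: torch.Tensor) -> torch.Tensor:
    p = torch.softmax(head(h_cls), dim=-1)          # p in [0,1]^3 over positive/negative/neutral
    return p.argmax(dim=-1)
```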
Methodology ::: Domain Adaptation through Language Model Finetuning
In academia, it is common that the performance of a machine learning model is evaluated in-domain. This means that the model is evaluated on a test set that comes from the same distribution as the training set. In real-world applications this setting is not always valid, as the trained model is used to predict previously unseen data.
In order to evaluate the performance of a machine learning model more robustly, its generalization error can be evaluated across different domains, i.e. cross-domain. Additionally, the model itself can be adapted towards a target domain. This is known as Domain Adaptation, which is a special case of Transductive Transfer Learning in the taxonomy of BIBREF17. Here, it is typically assumed that supervised data for a specific task is only available for a source domain $S$, whereas only unsupervised data is available in the target domain $T$. The goal is to optimize performance of the task in the target domain while transferring task-specific knowledge from the source domain.
If we map this framework to our challenge, we define Aspect-Target Sentiment Classification as the transfer-task and BERT language model finetuning is used for domain adaptation. In terms of on which domain is finetuned on, the full transfer-procedure can be expressed in the following way:
Here, $D_{LM}$ stands for the domain on which the language model is finetuned and can take on the values of Restaurants, Laptops or (Restaurants $\cup $ Laptops). The domain for training $D_{Train}$ can take on the same values; for the joint case, the training datasets for laptops and restaurants are simply combined. The domain for testing $D_{Test}$ can only take on the values Restaurants or Laptops.
Combining finetuning and training steps gives us nine different evaluation scenarios, which we group into the following four categories:
Methodology ::: In-Domain Training
ATSC is trained on a domain-specific dataset and evaluated on the test set from the same domain. This can be expressed as
$D_{LM} \rightarrow T \rightarrow T,$ where $T$ is our target domain and can be either Laptops or Restaurants. It is expected that the performance of the model is best if $D_{LM} = T$.
Methodology ::: Cross-Domain Training
ATSC is trained on a domain-specific dataset and evaluated on the test set from the other domain. This can be expressed as
$D_{LM} \rightarrow S \rightarrow T,$ where $S\ne T$ are source and target domain and can be either Laptops or Restaurants.
Methodology ::: Cross-Domain Adaptation
As a special case of cross-domain Training we expect performance to be optimal if $D_{LM} = T$. This is the variant of Domain Adaptation and is written as
$T \rightarrow S \rightarrow T.$
Methodology ::: Joint-Domain Training
ATSC is trained on both domain-specific datasets jointly and evaluated on both test sets independently. This can be expressed as
$D_{LM} \rightarrow (S \cup T) \rightarrow T,$ where $S\ne T$ are source- and target domain and can either be Laptops or Restaurants.
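The nine finetune/train combinations, each evaluated on both test sets, can be enumerated directly; the sketch below reproduces the grouping into the four categories described above. The labels are ours, and cross-domain adaptation is treated as the special case of cross-domain training in which the language model was finetuned on the test domain.

```python
from itertools import product

TEST_DOMAINS = ("Laptops", "Restaurants")
LM_DOMAINS = ("Laptops", "Restaurants", "Joint")      # D_LM
TRAIN_DOMAINS = ("Laptops", "Restaurants", "Joint")   # D_Train

def category(d_lm: str, d_train: str, d_test: str) -> str:
    if d_train == "Joint":
        return "joint-domain training"
    if d_train == d_test:
        return "in-domain training"
    return "cross-domain adaptation" if d_lm == d_test else "cross-domain training"

for d_lm, d_train in product(LM_DOMAINS, TRAIN_DOMAINS):     # the nine evaluation scenarios
    for d_test in TEST_DOMAINS:                              # each scored on both test sets
        print(f"{d_lm:11} -> {d_train:11} -> {d_test:11}  [{category(d_lm, d_train, d_test)}]")
```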
Experiments
In our experiments we aim to answer the following research questions (RQs):
RQ1: How does the number of training iterations in the BERT language model finetuning stage influence the ATSC end-task performance? At what point does performance start to improve, and when does it converge?
RQ2: When trained in-domain, what ATSC end-task performance can be reached through fully exploited finetuning of the BERT language model?
RQ3: When trained cross-domain in the special case of domain adaptation, what ATSC end-task performance can be reached if BERT language model finetuning is fully exploited?
Experiments ::: Datasets for Classification and Language Model Finetuning
We conduct experiments using the two SemEval 2014 Task 4 Subtask 2 datasets BIBREF1 for the laptops and the restaurants domain. The two datasets contain sentences with multiple marked aspect terms that each have a 3-level sentiment polarity (positive, neutral or negative) associated. In the original dataset the conflict label is also present. Here, conflicting labels are dropped for reasons of comparability with BIBREF9. Both datasets are small, detailed statistics are shown in tab:datasets.
For BERT language model finetuning we prepare three corpora for the two domains of laptops and restaurants. For the restaurants domain we use Yelp Dataset Challenge reviews and for the laptops domain we use Amazon Laptop reviews BIBREF14. For the laptop domain we filtered out reviews that appear in the SemEval 2014 laptops dataset to avoid training bias for the test data. To be compatible with the next-sentence prediction task used during finetuning, we removed reviews containing fewer than two sentences.
For the laptop corpus, $1,007,209$ sentences are left after pre-processing. For the restaurants domain, more reviews are available; we sampled $10,000,000$ sentences to have a sufficient amount of data for fully exploited language model finetuning. In order to compensate for the smaller amount of finetuning data in the laptops domain, we finetune for more epochs, 30 epochs in the case of the laptops domain compared to 3 epochs for the restaurants domain, so that the BERT model trains on about 30 million sentences in both cases. This means that a sentence can be seen multiple times, each with a different language model masking.
We also create a mixed corpus to jointly finetune both domains. Here, we sample 1 Mio. restaurant reviews and combine them with the laptop reviews. This results in about 2 Mio. reviews that are finetuned for 15 epochs. The exact statistics for the three finetuning corpora are shown in the top of tab:datasets.
To be able to reproduce our finetuning corpora, we make the code that is used to generate them available online.
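A sketch of this corpus preparation with hypothetical helper names; the authors publish their actual generation code, so this is only meant to make the filtering and sampling steps concrete.

```python
import random

def prepare_finetuning_corpus(reviews, semeval_review_texts, target_sentences, seed=0):
    """Filter and subsample review sentences for BERT language model finetuning.

    reviews:              iterable of (review_text, sentences) pairs
    semeval_review_texts: set of review texts occurring in the SemEval 2014 data (bias filter)
    target_sentences:     rough number of sentences to keep (e.g. 10M for restaurants)
    """
    kept = []
    for text, sentences in reviews:
        if text in semeval_review_texts:   # avoid training bias towards the test data
            continue
        if len(sentences) < 2:             # next-sentence prediction needs at least two sentences
            continue
        kept.append(sentences)
    random.Random(seed).shuffle(kept)

    corpus, total = [], 0
    for sentences in kept:
        corpus.append(sentences)
        total += len(sentences)
        if total >= target_sentences:
            break
    return corpus
```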
Experiments ::: Hyperparameters
We use BERTBASE (uncased) as the base for all of our experiments, with the exception of XLNetBASE (cased), which is used as one of the baseline models.
For the BERT language model finetuning we use 32 bit floating point computations using the Adam optimizer BIBREF18. The batchsize is set to 32 while the learning rate is set to $3\cdot 10^{-5}$. The maximum input sequence length is set to 256 tokens, which amounts to about 4 sentences per sequence on average. As shown in tab:datasets, we finetune the language models on each domain so that the model trains a total of about 30 Mio. sentences (7.5 Mio. sequences).
For training the BERT and XLNet models on the down-stream task of ATSC we use mixed 16 bit and 32 bit floating point computations, the Adam optimizer, and a learning rate of $3\cdot 10^{-5}$ and a batchsize of 32. We train the model for a total of 7 epochs. The validation accuracy converges after about 3 epochs of training on all datasets, but training loss still improves after that.
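The downstream ATSC training then amounts to standard supervised fine-tuning. A condensed sketch of the loop is shown below, assuming a model object that maps a tokenized sentence-pair batch to the three polarity logits (e.g. a BERT encoder plus the head sketched earlier); mixed-precision details are omitted.

```python
import torch

def train_atsc(model, train_loader, epochs: int = 7, lr: float = 3e-5, device: str = "cuda"):
    """Supervised ATSC fine-tuning: Adam, learning rate 3e-5, 7 epochs (batch size set in the loader)."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for batch, labels in train_loader:
            optimizer.zero_grad()
            logits = model(batch.to(device))            # (batch_size, 3) polarity logits
            loss = loss_fn(logits, labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```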
It is important to note that all our reported results are the average of 9 runs with different random initializations. This is needed to measure the significance of improvements, as the standard deviation in accuracy amounts to roughly $1\%$ for all experiments, see fig:acc-dep-lmiterations.
Experiments ::: Compared Methods
We compare in-domain results to current state of the art methods, which we will now describe briefly.
SDGCN-BERT BIBREF11 explicitly models sentiment dependencies for sentences with multiple aspects with a graph convolutional network. This method is current state-of-the-art on the SemEval 2014 laptops dataset.
AEN-BERT BIBREF8 is an attentional encoder network. When used on top of BERT embeddings this method performs especially well on the laptops dataset.
BERT-SPC BIBREF8 is BERT used in sentence-pair classification mode. This is exactly the same method as our BERT-base baseline and therefore, we can cross-check the authors results.
BERT-PT BIBREF9 uses multi-task fine-tuning prior to downstream classification, where the BERT language model is finetuned jointly with a question answering task. It was the state-of-the-art method on the restaurants dataset prior to this paper.
To our knowledge, cross- and joint-domain training on the SemEval 2014 Task 4 datasets has not been analyzed so far. Thus, we compare our method to two very strong baselines: BERT and XLNet.
BERT-base BIBREF7 is using the pre-trained BERTBASE embeddings directly on the down-stream task without any domain specific language model finetuning.
XLNet-base BIBREF19 is a method also based on general language model pre-training, similar to BERT. Instead of randomly masking tokens for pre-training as in BERT, a more general permutation objective is used, where all possible variants of masking are fully exploited.
Our models are BERT models whose language model has been finetuned on different domain corpora.
BERT-ADA Lapt is the BERT language model finetuned on the laptops domain corpus.
BERT-ADA Rest is the BERT language model finetuned on the restaurant domain corpus.
BERT-ADA Joint is the BERT language model finetuned on the corpus containing an equal amount of laptops and restaurants reviews.
Experiments ::: Results Analysis
The results of our experiments are shown in fig:acc-dep-lmiterations and tab:results respectively.
To answer RQ1, which is concerned with details on domain-specific language model finetuning, we can see in fig:acc-dep-lmiterations that first of all, language model finetuning has a substantial effect on ATSC end-task performance. Secondly, we see that in the laptops domain the performance starts to increase at about 10 Mio. finetuned sentences. This is an interesting insight as one would expect a relation closer to a logarithmic curve. One reason might be that it takes many steps to train knowledge into the BERT language model due to its vast amount of parameters. The model already converges at around 17 Mio. sentences. More finetuning does not improve performance significantly. In addition, we find that different runs have a high variance, the standard deviation amounts to about $1\%$ in accuracy, which justifies averaging over 9 runs to measure differences in model performance reliably.
To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think that one reason for that might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the number of unique laptop reviews is the same in the laptops and joint corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline performs generally stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\%$ on the laptops dataset.
To answer RQ3, which is concerned with domain adaptation, we can see in the grayed out cells in tab:results, which correspond to the cross-domain adaption case where the BERT language model is trained on the target domain, that domain adaptation works well with $2.2\%$ absolute accuracy improvement on the laptops test set and even $3.6\%$ accuracy improvement on the restaurants test set compared to BERT-base.
In general, the ATSC task generalizes well cross-domain, with about 2-$3\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that syntactical relationships between the aspect-target and the phrase expressing sentiment polarity as well as knowing the sentiment-polarity itself are sufficient to solve the ATSC task in many cases.
For the joint-training case, we find that combining both training datasets improves performance on both test sets. This result is intuitive, as more training data leads to better performance if the domains do not confuse each other. Interestingly, in the joint-training case the BERT-ADA Joint model performs especially strongly when measured by the Macro-F1 metric. A reason for this might be that the SemEval 2014 datasets are imbalanced due to the dominance of the positive label. It seems that, through finetuning the language model on both domains, the model learns to classify the neutral class much better, especially in the laptops domain.
Conclusion
We performed experiments on the task of Aspect-Target Sentiment Classification by first finetuning a pre-trained BERT model on a domain specific corpus with subsequent training on the down-stream classification task.
We analyzed how the number of domain-specific BERT language model finetuning steps relates to the end-task performance.
With the findings on how to best exploit BERT language model finetuning, we were able to train high-performing models, one of which even sets a new state of the art on the SemEval 2014 Task 4 restaurants dataset.
We further evaluated our models cross-domain to explore the robustness of Aspect-Target Sentiment Classification. We found that in general, this task transfers well between the laptops and the restaurants domain.
As a special case, we ran cross-domain adaptation experiments, where the BERT language model is specifically finetuned on the target domain. We achieve a significant improvement over unadapted models: a cross-domain adapted model performs even better than a BERT-base model that is trained in-domain.
Overall, our findings reveal promising directions for follow-up work. The XLNet-base model performs strongly on the ATSC task. Here, domain-specific finetuning could probably bring significant performance improvements. Another interesting direction for future work would be to investigate cross-domain behavior for an additional domain like hotels, which is more similar to the restaurants domain. Here, it could be interesting to find out whether the shared nature of these domains would result in more confusion or whether they would behave synergistically. | $2.2\%$ absolute accuracy improvement on the laptops test set, $3.6\%$ accuracy improvement on the restaurants test set |
0b92fb692feb35d4b4bf4665f7754d283d6ad5f3 | 0b92fb692feb35d4b4bf4665f7754d283d6ad5f3_0 | Q: What are the performance results?
Text: Introduction
Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity.
In contrast to SA, the more fine-grained task of Aspect Based Sentiment Analysis (ABSA) BIBREF0, BIBREF1 aims at finding both the aspect of an entity like a restaurant and the sentiment associated with this aspect.
It is important to note that ABSA comes in two variants. We will use the sentence “I love their dumplings” to explain these variants in detail.
Both variants are implemented as a two-step procedure. The first variant consists of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated with the aspect is classified. For our example sentence the correct result is (“food”, “positive”).
The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like “dumplings” are detected. In the second step, ATSC, the sentiment polarity associated with the aspect-target is determined. In our example the correct result is the tuple ("dumplings", "positive").
In this work, we focus on ATSC. In recent years, specialized neural architectures BIBREF2, BIBREF3 have been developed that substantially improved modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models BIBREF4, BIBREF5, BIBREF6, BIBREF7 as a base for many down-stream tasks, including ABSA BIBREF8, BIBREF9, BIBREF10. We still see huge potential in this trend, which is why we approach the ATSC task using the BERT architecture.
As shown by BIBREF9, for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the model on domain-specific corpora (in their case review corpora) that have not been used for pre-training BERT or other language models.
We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are:
The analysis of the influence of the number of training steps used for BERT language model finetuning on Aspect-Target Sentiment Classification performance.
The findings on how to exploit BERT language model finetuning enable us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset.
The analysis of cross-domain adaptation between the laptops and restaurants domains. Adaptation is tested by finetuning the BERT language model in a self-supervised way on the target domain and then training it in a supervised way on the ATSC task in the source domain. In addition, the performance of training on the combination of both datasets is measured.
Related Works
We separate our discussion of related work into two areas: first, neural methods applied to ATSC that have improved performance solely through architectural improvements; and second, methods that additionally aim to transfer knowledge from semantically related tasks or domains.
Related Works ::: Architecture Improvements for Aspect-Target Sentiment Classification
The datasets typically used for Aspect-Target Sentiment Classification are the SemEval 2014 Task 4 datasets BIBREF1 for the restaurants and laptops domain. Unfortunately, both datasets only have a small number of training examples. One common approach to compensate for insufficient training examples is to invent neural architectures that better model ATSC. For example, in the past a big leap in classification performance was achieved with the use of the Memory Network architecture BIBREF3, which uses memory to remember context words and explicitly models attention over both the target word and context. It was found that making full use of context words improves their model compared to previous models BIBREF2 that make use of left- and right-sided context independently.
BIBREF8 proposed Attention Encoder Networks (AEN), a modification to the transformer architecture. The authors split the Multi-Head Attention (MHA) layers into Intra-MHA and Inter-MHA layers in order to model target words and context differently, which results in a more lightweight model compared to the transformer architecture.
Another recent performance leap was achieved by BIBREF11, who model dependencies between sentiment words explicitly in sentences with more than one aspect-target by using a graph convolutional neural network. They show that their architecture performs particularly well if multiple aspects are present in a sentence.
Related Works ::: Knowledge Transfer for Aspect-Target Sentiment Classification Analysis
Another approach to compensate for insufficient training examples is to transfer knowledge across domains or across similar tasks.
BIBREF12 proposed Multi-Granularity Alignment Networks (MGAN). They use this architecture to transfer knowledge from both an aspect-category classification task and also across different domains. They built a large scale aspect-category dataset specifically for this.
BIBREF13 transfer knowledge from a document-level sentiment classification task trained on the amazon review dataset BIBREF14. They successfully apply pre-training by reusing the weights of a Long Short Term Memory (LSTM) network BIBREF15 that has been trained on the document-level sentiment task. In addition, they apply multi-task learning where aspect and document-level tasks are learned simultaneously by minimizing a joint loss function.
Similarly, BIBREF9 introduce a multi-task loss function to simultaneously optimize the BERT model's BIBREF7 pre-training objectives as well as a question answering task.
In contrast to the methods described above that aim to transfer knowledge from a different source task like question answering or document-level sentiment classification, this paper aims at transferring knowledge across different domains by finetuning the BERT language model.
Methodology
We approach the Aspect-Target Sentiment Classification task using a two-step procedure. We use the pre-trained BERT architecture as a basis. In the first step we finetune the pre-trained weights of the language model further in a self-supervised way on a domain-specific corpus. In the second step we train the finetuned language model in a supervised way on the ATSC end-task.
In the following subsections, we discuss the BERT architecture, how we finetune the language model, and how we transform the ATSC task into a BERT sequence-pair classification task BIBREF10. Finally, we discuss the different end-task training and domain-specific finetuning combinations we employ to evaluate our model's generalization performance not only in-domain but also cross-domain.
Methodology ::: BERT
The BERT model builds on many previous innovations: contextualized word representations BIBREF4, the transformer architecture BIBREF16, and pre-training on a language modeling task with subsequent end-to-end finetuning on a downstream task BIBREF5, BIBREF6. Due to being deeply bidirectional, the BERT architecture creates very powerful sequence representations that perform extremely well on many downstream tasks BIBREF7.
The main innovation of BERT is that instead of the next-word prediction objective, a different objective is used to train the language model. This objective consists of two parts.
The first part is the masked language model objective, where the model learns to predict tokens, which have been randomly masked, from the context.
The second part is the next-sequence prediction objective, where the model needs to predict if a sequence $B$ would naturally follow the previous sequence $A$. This objective enables the model to capture long-term dependencies better. Both objectives are discussed in more detail in the next section.
As a base for our experiments we use the BERTBASE model, which has been pre-trained by the Google research team. It has the following parameters: 12 layers, 768 hidden dimensions per token and 12 attention heads. It has 110 million parameters in total.
For finetuning the BERT language model on a specific domain we use the weights of BERTBASE as a starting point.
Methodology ::: BERT Language Model Finetuning
As the first step of our procedure we perform language model finetuning of the BERT model using domain-specific corpora. Algorithmically, this is equivalent to pre-training. Domain-specific language model finetuning as an intermediate step towards ATSC has been shown to be effective by BIBREF9. As an extension to their paper, we investigate the limits of language model finetuning in terms of how end-task performance depends on the number of training steps.
The training input representation for language model finetuning consists of two sequences $s_A$ and $s_B$ in the format of $"\textrm {[CLS]} \ s_{A} \ \textrm {[SEP]} \ s_{B} \ \textrm {[SEP]}"$, where [CLS] is a dummy token used for downstream classification and [SEP] are separator tokens.
Methodology ::: BERT Language Model Finetuning ::: Masked Language Model Objective
The sequences $A$ and $B$ have tokens randomly masked out in order for the model to learn to predict them. The following example shows why domain-specific finetuning can alleviate the bias from pre-training on a Wikipedia corpus: "The touchscreen is an [MASK] device". In the fact-based context of Wikipedia the [MASK] could be "input" and in the review domain a typical guess could be the general opinion word "amazing".
Methodology ::: BERT Language Model Finetuning ::: Next-Sentence Prediction
In order to train BERT to capture long-term dependencies better, the model is trained to predict whether sequence $B$ follows sequence $A$. If this is the case, sequence $A$ and sequence $B$ are jointly sampled from the same document in the order in which they occur naturally. Otherwise the sequences are sampled randomly from the training corpus.
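To make the two objectives concrete, the following sketch (our illustration, not the authors' code) shows how such finetuning examples could be assembled from a tokenized review corpus; the 15% masking rate is the BERT default, and all function names are ours.

```python
import random

CLS, SEP, MASK = "[CLS]", "[SEP]", "[MASK]"

def make_lm_example(doc_sentences, corpus_sentences, mask_prob=0.15):
    """Build one '[CLS] A [SEP] B [SEP]' training example with an NSP label
    and randomly masked tokens. Sentences are lists of word-piece tokens."""
    i = random.randrange(len(doc_sentences) - 1)        # needs >= 2 sentences per review
    seq_a = doc_sentences[i]
    if random.random() < 0.5:                           # positive pair: B really follows A
        seq_b, is_next = doc_sentences[i + 1], 1
    else:                                               # negative pair: B sampled at random
        seq_b, is_next = random.choice(corpus_sentences), 0
    tokens = [CLS] + seq_a + [SEP] + seq_b + [SEP]
    masked, targets = [], []
    for tok in tokens:
        if tok not in (CLS, SEP) and random.random() < mask_prob:
            masked.append(MASK)
            targets.append(tok)                         # token the model must recover
        else:
            masked.append(tok)
            targets.append(None)                        # position is not predicted
    return masked, targets, is_next
```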
Methodology ::: Aspect-Target Sentiment Classification
The ATSC task aims at classifying sentiment polarity into the three classes positive, negative, neutral with respect to an aspect-target. The input to the classifier is a tokenized sentence $s=s_{1:n}$ and a target $t=s_{j:j+m}$ contained in the sentence, where $j < j+m \le n$. Similar to previous work by BIBREF10, we transform the input into a format compatible with BERT sequence-pair classification tasks: $"\textrm {[CLS]} \ s \ \textrm {[SEP]} \ t \ \textrm {[SEP]}"$.
In the BERT architecture the position of the token embeddings is structurally maintained after each Multi-Head Attention layer. Therefore, we refer to the last hidden representation of the [CLS] token as $h_{[CLS]} \in \mathbf {R}^{768 \times 1}$. The number of sentiment polarity classes is three. A distribution $p \in [0,1]^3$ over these classes is predicted using a fully-connected layer with 3 output neurons on top of $h_{[CLS]}$, followed by a softmax activation function
where $b \in \mathbf {R}^3$ and $W \in \mathbf {R}^{3 \times 768}$. Cross-entropy is used as the training loss. The way we use BERT for classifying the sentiment polarities is equivalent to how BERT is used for sequence-pair classification tasks in the original paper BIBREF7.
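As an illustration, the sequence-pair setup described above could be realised roughly as follows. This is a sketch of ours that assumes a recent version of the HuggingFace transformers library rather than the authors' implementation; the AtscHead class is a name we introduce here.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

class AtscHead(nn.Module):
    """Linear layer + softmax over the [CLS] representation (3 polarity classes)."""
    def __init__(self, hidden_size=768, num_classes=3):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_classes)

    def forward(self, h_cls):
        return torch.softmax(self.linear(h_cls), dim=-1)

head = AtscHead()

# Passing a text pair builds "[CLS] sentence [SEP] target [SEP]" automatically.
enc = tokenizer("I love their dumplings", "dumplings", return_tensors="pt")
h_cls = bert(**enc).last_hidden_state[:, 0, :]   # hidden state of the [CLS] token
probs = head(h_cls)                              # distribution over {positive, negative, neutral}
```

During training, the cross-entropy loss over the three classes is minimised for the classification head and the BERT encoder jointly.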
Methodology ::: Domain Adaptation through Language Model Finetuning
In academia, it is common that the performance of a machine learning model is evaluated in-domain. This means that the model is evaluated on a test set that comes from the same distribution as the training set. In real-world applications this setting is not always valid, as the trained model is used to predict previously unseen data.
In order to evaluate the performance of a machine learning model more robustly, its generalization error can be evaluated across different domains, i.e. cross-domain. Additionally, the model itself can be adapted towards a target domain. This is known as Domain Adaptation, which is a special case of Transductive Transfer Learning in the taxonomy of BIBREF17. Here, it is typically assumed that supervised data for a specific task is only available for a source domain $S$, whereas only unsupervised data is available in the target domain $T$. The goal is to optimize performance of the task in the target domain while transferring task-specific knowledge from the source domain.
If we map this framework to our challenge, we define Aspect-Target Sentiment Classification as the transfer-task and BERT language model finetuning is used for domain adaptation. In terms of which domain the language model is finetuned on, the full transfer procedure can be expressed in the following way:
Here, $D_{LM}$ stands for the domain on which the language model is finetuned and can take on the values Restaurants, Laptops or (Restaurants $\cup $ Laptops). The domain for training $D_{Train}$ can take on the same values; for the joint case the training datasets for laptops and restaurants are simply combined. The domain for testing $D_{Test}$ can only take on the values Restaurants or Laptops.
Combining finetuning and training steps gives us nine different evaluation scenarios, which we group into the following four categories:
Methodology ::: In-Domain Training
ATSC is trained on a domain-specific dataset and evaluated on the test set from the same domain. This can be expressed as
$D_{LM} \rightarrow T \rightarrow T,$ where $T$ is our target domain and can be either Laptops or Restaurants. It is expected that the performance of the model is best if $D_{LM} = T$.
Methodology ::: Cross-Domain Training
ATSC is trained on a domain-specific dataset and evaluated on the test set from the other domain. This can be expressed as
$D_{LM} \rightarrow S \rightarrow T,$ where $S\ne T$ are source and target domain and can be either Laptops or Restaurants.
Methodology ::: Cross-Domain Adaptation
As a special case of cross-domain Training we expect performance to be optimal if $D_{LM} = T$. This is the variant of Domain Adaptation and is written as
$T \rightarrow S \rightarrow T.$
Methodology ::: Joint-Domain Training
ATSC is trained on both domain-specific datasets jointly and evaluated on both test sets independently. This can be expressed as
$D_{LM} \rightarrow (S \cup T) \rightarrow T,$ where $S\ne T$ are source- and target domain and can either be Laptops or Restaurants.
Experiments
In our experiments we aim to answer the following research questions (RQs):
RQ1: How does the number of training iterations in the BERT language model finetuning stage influence the ATSC end-task performance? At what point does performance start to improve, and when does it converge?
RQ2: When trained in-domain, what ATSC end-task performance can be reached through fully exploited finetuning of the BERT language model?
RQ3: When trained cross-domain in the special case of domain adaptation, what ATSC end-task performance can be reached if BERT language model finetuning is fully exploited?
Experiments ::: Datasets for Classification and Language Model Finetuning
We conduct experiments using the two SemEval 2014 Task 4 Subtask 2 datasets BIBREF1 for the laptops and the restaurants domain. The two datasets contain sentences with multiple marked aspect terms, each annotated with a 3-level sentiment polarity (positive, neutral or negative). In the original datasets the conflict label is also present; here, conflicting labels are dropped for reasons of comparability with BIBREF9. Both datasets are small; detailed statistics are shown in tab:datasets.
For BERT language model finetuning we prepare three corpora for the two domains of laptops and restaurants. For the restaurants domain we use Yelp Dataset Challenge reviews and for the laptops domain we use Amazon Laptop reviews BIBREF14. For the laptops domain we filtered out reviews that appear in the SemEval 2014 laptops dataset to avoid training bias for the test data. To be compatible with the next-sentence prediction task used during finetuning, we removed reviews containing fewer than two sentences.
For the laptop corpus, $1,007,209$ sentences are left after pre-processing. For the restaurants domain more reviews are available, so we sampled $10,000,000$ sentences to have a sufficient amount of data for fully exploited language model finetuning. In order to compensate for the smaller amount of finetuning data in the laptops domain, we finetune for more epochs, 30 epochs in the case of the laptops domain compared to 3 epochs for the restaurants domain, so that the BERT model trains on about 30 million sentences in both cases. This means that a sentence can be seen multiple times, each time with a different language model masking.
We also create a mixed corpus to jointly finetune both domains. Here, we sample 1 million restaurant reviews and combine them with the laptop reviews. This results in about 2 million reviews, on which we finetune for 15 epochs. The exact statistics for the three finetuning corpora are shown in the top of tab:datasets.
To be able to reproduce our finetuning corpora, we make the code that is used to generate them available online.
Experiments ::: Hyperparameters
We use BERTBASE (uncased) as the base for all of our experiments, with the exception of XLNetBASE (cased), which is used as one of the baseline models.
For BERT language model finetuning we use 32-bit floating point computations and the Adam optimizer BIBREF18. The batch size is set to 32 and the learning rate to $3\cdot 10^{-5}$. The maximum input sequence length is set to 256 tokens, which amounts to about 4 sentences per sequence on average. As shown in tab:datasets, we finetune the language models on each domain so that the model trains on a total of about 30 million sentences (7.5 million sequences).
For training the BERT and XLNet models on the down-stream task of ATSC we use mixed 16-bit and 32-bit floating point computations, the Adam optimizer, a learning rate of $3\cdot 10^{-5}$ and a batch size of 32. We train the model for a total of 7 epochs. The validation accuracy converges after about 3 epochs of training on all datasets, but the training loss still improves after that.
It is important to note that all reported results are the average of 9 runs with different random initializations. This is needed to measure the significance of improvements, as the standard deviation in accuracy amounts to roughly $1\%$ for all experiments, see fig:acc-dep-lmiterations.
Experiments ::: Compared Methods
We compare in-domain results to current state of the art methods, which we will now describe briefly.
SDGCN-BERT BIBREF11 explicitly models sentiment dependencies for sentences with multiple aspects with a graph convolutional network. This method is current state-of-the-art on the SemEval 2014 laptops dataset.
AEN-BERT BIBREF8 is an attentional encoder network. When used on top of BERT embeddings this method performs especially well on the laptops dataset.
BERT-SPC BIBREF8 is BERT used in sentence-pair classification mode. This is exactly the same method as our BERT-base baseline, and therefore we can cross-check the authors' results.
BERT-PT BIBREF9 uses multi-task fine-tuning prior to downstream classification, where the BERT language model is finetuned jointly with a question answering task. It was state-of-the-art on the restaurants dataset prior to this paper.
To our knowledge, cross- and joint-domain training on the SemEval 2014 Task 4 datasets has not been analyzed so far. Thus, we compare our method to two very strong baselines: BERT and XLNet.
BERT-base BIBREF7 uses the pre-trained BERTBASE embeddings directly on the down-stream task without any domain-specific language model finetuning.
XLNet-base BIBREF19 is a method also based on general language model pre-training, similar to BERT. Instead of randomly masking tokens for pre-training like in BERT, a more general permutation objective is used, where all possible variants of masking are fully exploited.
Our models are BERT models whose language model has been finetuned on different domain corpora.
BERT-ADA Lapt is the BERT language model finetuned on the laptops domain corpus.
BERT-ADA Rest is the BERT language model finetuned on the restaurant domain corpus.
BERT-ADA Joint is the BERT language model finetuned on the corpus containing an equal amount of laptops and restaurants reviews.
Experiments ::: Results Analysis
The results of our experiments are shown in fig:acc-dep-lmiterations and tab:results respectively.
To answer RQ1, which is concerned with details of domain-specific language model finetuning, we can see in fig:acc-dep-lmiterations that, first of all, language model finetuning has a substantial effect on ATSC end-task performance. Secondly, we see that in the laptops domain the performance starts to increase at about 10 million finetuned sentences. This is an interesting insight, as one would expect a relation closer to a logarithmic curve. One reason might be that it takes many steps to train knowledge into the BERT language model due to its vast amount of parameters. The model already converges at around 17 million sentences; more finetuning does not improve performance significantly. In addition, we find that different runs have a high variance; the standard deviation amounts to about $1\%$ in accuracy, which justifies averaging over 9 runs to measure differences in model performance reliably.
To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think one reason for this might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the amount of unique laptop reviews is the same in the laptops and joint corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline generally performs stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\%$ on the laptops dataset.
To answer RQ3, which is concerned with domain adaptation, we can see from the grayed-out cells in tab:results, which correspond to the cross-domain adaptation case where the BERT language model is finetuned on the target domain, that domain adaptation works well: compared to BERT-base it yields a $2.2\%$ absolute accuracy improvement on the laptops test set and even a $3.6\%$ accuracy improvement on the restaurants test set.
In general, the ATSC task generalizes well cross-domain, with only about a 2-$3\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that the syntactic relationship between the aspect-target and the phrase expressing sentiment polarity, together with knowledge of the sentiment polarity itself, is sufficient to solve the ATSC task in many cases.
For the joint-training case, we find that combining both training datasets improves performance on both test sets. This result is intuitive, as more training data leads to better performance as long as the domains do not confuse each other. Interestingly, in the joint-training case the BERT-ADA Joint model performs especially strongly when measured by the Macro-F1 metric. A reason for this might be that the SemEval 2014 datasets are imbalanced due to the dominance of the positive label. It seems that by finetuning the language model on both domains, the model learns to classify the neutral class much better, especially in the laptops domain.
Conclusion
We performed experiments on the task of Aspect-Target Sentiment Classification by first finetuning a pre-trained BERT model on a domain-specific corpus and subsequently training it on the down-stream classification task.
We analyzed how the number of domain-specific BERT language model finetuning steps relates to end-task performance.
With the findings on how to best exploit BERT language model finetuning, we were able to train high-performing models, one of which even sets a new state-of-the-art on the SemEval 2014 Task 4 restaurants dataset.
We further evaluated our models cross-domain to explore the robustness of Aspect-Target Sentiment Classification. We found that in general, this task transfers well between the laptops and the restaurants domain.
As a special case, we ran cross-domain adaptation experiments, where the BERT language model is specifically finetuned on the target domain. We achieve a significant improvement over unadapted models: a cross-domain adapted model even performs better than a BERT-base model that is trained in-domain.
Overall, our findings reveal promising directions for follow-up work. The XLNet-base model performs strongly on the ATSC task; here, domain-specific finetuning could probably bring significant performance improvements. Another interesting direction for future work would be to investigate cross-domain behavior for an additional domain such as hotels, which is more similar to the restaurants domain. Here, it could be interesting to find out whether the similarity of these domains would result in more confusion or whether they would behave synergistically. | results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset, new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. |
521a7042b6308e721a7c8046be5084bc5e8ca246 | 521a7042b6308e721a7c8046be5084bc5e8ca246_0 | Q: What is a confusion network or lattice?
Text: Introduction
Recent years have seen an increased usage of spoken language technology in applications ranging from speech transcription BIBREF0 to personal assistants BIBREF1 . The quality of these applications heavily depends on the accuracy of the underlying automatic speech recognition (ASR) system yielding 1-best hypotheses and how well ASR errors are mitigated. The standard approach to ASR error mitigation is confidence scores BIBREF2 , BIBREF3 . A low confidence can give a signal to downstream applications about the high uncertainty of the ASR in its prediction and measures can be taken to mitigate the risk of making a wrong decision. However, confidence scores can also be used in upstream applications such as speaker adaptation BIBREF4 and semi-supervised training BIBREF5 , BIBREF6 to reflect uncertainty among multiple possible alternative hypotheses. Downstream applications, such as machine translation and information retrieval, could similarly benefit from using multiple hypotheses.
A range of confidence scores has been proposed in the literature BIBREF3 . In the simplest case, confidence scores are posterior probabilities that can be derived using approaches such as confusion networks BIBREF7 , BIBREF8 . These posteriors typically significantly over-estimate confidence BIBREF8 . Therefore, a number of approaches have been proposed to rectify this problem. These range from simple piece-wise linear mappings given by decision trees BIBREF8 to more complex sequence models such as conditional random fields BIBREF9 , and to neural networks BIBREF10 , BIBREF11 , BIBREF12 . Though improvements over posterior probabilities on 1-best hypotheses were reported, the impact of these approaches on all hypotheses available within confusion networks and lattices has not been investigated.
Extending confidence estimation to confusion network and lattice structures can be straightforward for some approaches, such as decision trees, and challenging for others, such as recurrent forms of neural networks. The previous work on encoding graph structures into neural networks BIBREF13 has mostly focused on embedding lattices into a fixed-dimensional vector representation BIBREF14, BIBREF15. This paper examines a particular example of extending a bi-directional recurrent neural network (BiRNN) BIBREF16 to confusion network and lattice structures. This requires specifying how BiRNN states are propagated in the forward and backward directions, how to merge a variable number of BiRNN states, and how target confidence values are assigned to confusion network and lattice arcs. The paper shows that the state propagation in the forward and backward directions has close links to the standard forward-backward algorithm BIBREF17. This paper proposes several approaches for merging BiRNN states, including an attention mechanism BIBREF18. Finally, it describes a Levenshtein algorithm for assigning targets to confusion networks and an approximate solution for lattices. Combined, these make it possible to assign confidence scores to every word hypothesised by the ASR, not just from a single extracted hypothesis.
The rest of this paper is organised as follows. Section "Bi-Directional Recurrent Neural Network" describes the use of bi-directional recurrent neural networks for confidence estimation in 1-best hypotheses. Section "Confusion Network and Lattice Extensions" describes the extension to confusion network and lattice structures. Experimental results are presented in Section "Experiments" . The conclusions drawn from this work are given in Section "Conclusions" .
Bi-Directional Recurrent Neural Network
Fig. 1 shows the simplest form of the BiRNN BIBREF16 . Unlike its uni-directional version, the BiRNN makes use of two recurrent states, one going in the forward direction in time $\overrightarrow{\mathbf {h}}_{t}$ and another in the backward direction $\overleftarrow{\mathbf {h}}_{t}$ to model past (history) and future information respectively.
The past information can be modelled by
$$\overrightarrow{\mathbf {h}}_{t} = \sigma (\mathbf { W}^{(\overrightarrow{{h}})}\overrightarrow{\mathbf {h}}_{t-1} + \mathbf { W}^{(x)}\mathbf {x}_{t})$$ (Eq. 4)
where $\mathbf {x}_{t}$ is an input feature vector at time $t$ , $\mathbf {W}^{(x)}$ is an input matrix, $\mathbf {W}^{(\overrightarrow{{h}})}$ is a history matrix and $\sigma $ is an element-wise non-linearity such as a sigmoid. The future information is typically modelled in the same way. At any time $t$ the confidence $c_t$ can be estimated by
$$c_{t} = \sigma (\mathbf {w}^{(c)^{\sf T}}{\bf h}_{t} + {b}^{(c)})$$ (Eq. 5)
where $\mathbf {w}^{(c)}$ and ${b}^{(c)}$ are a parameter vector and a bias, $\sigma $ is any non-linearity that maps the confidence score into the range $[0,1]$ and $\mathbf {h}_{t}$ is a context vector that combines the past and future information.
$$\mathbf {h}_{t} = \begin{bmatrix}\overrightarrow{\bf h}_{t} & \overleftarrow{\bf h}_{t}\end{bmatrix}^{\sf T}$$ (Eq. 6)
The input features $\mathbf {x}_{t}$ play a fundamental role in the model's ability to assign accurate confidence scores. Numerous hand-crafted features have been proposed BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . In the simplest case, duration and word posterior probability can be used as input features. More complex features may include embeddings BIBREF23 , acoustic and language model scores and other information. The BiRNN can be trained by minimising the binary cross-entropy
$$H(\mathbf {c},\mathbf {c}^{*};\mathbf {\theta }) = -\dfrac{1}{T}\sum _{t=1}^{T} \Big \lbrace {c}_{t}^{*} \log (c_{t}) + (1 - {c}_{t}^{*}) \log (1 - c_{t})\Big \rbrace $$ (Eq. 7)
where $c_{t}$ is a predicted confidence score for time slot $t$ and $c_{t}^{*}$ is the associated reference value. The reference values can be obtained by aligning the 1-best ASR output and reference text using the Levenshtein algorithm. Note that deletion errors cannot be handled under this framework and need to be treated separately BIBREF22, BIBREF12. This form of BiRNN has been examined for confidence estimation in BIBREF11, BIBREF12.
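A minimal PyTorch sketch of such a BiRNN confidence estimator is given below. This is our own illustration rather than the released implementation; the feature dimension (posterior, duration and a 50-dimensional word embedding, as used later in the experiments) and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class BiRNNConfidence(nn.Module):
    """Bi-directional LSTM that outputs one confidence score per hypothesised word."""
    def __init__(self, feat_dim=52, hidden=128):
        super().__init__()
        self.birnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.ff = nn.Linear(2 * hidden, hidden)     # combines forward/backward states
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                           # x: (batch, T, feat_dim)
        h, _ = self.birnn(x)                        # concatenated context vectors h_t
        h = torch.tanh(self.ff(h))
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, T) confidences c_t

model = BiRNNConfidence()
loss_fn = nn.BCELoss()                              # binary cross-entropy of Eqn. 7
```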
The perfect confidence estimator would assign scores of one and zero to correctly and incorrectly hypothesised words respectively. In order to measure the accuracy of confidence predictions, a range of metrics have been proposed. Among these, normalised cross-entropy (NCE) is the most frequently used BIBREF24 . NCE measures the relative change in the binary cross-entropy when the empirical estimate of ASR correctness, $P_c$ , is replaced by predicted confidences $\mathbf {c}={c_1,\ldots ,c_T}$ . Using the definition of binary cross-entropy in Eqn. 7 , NCE can be expressed as
$$\text{NCE}(\mathbf {c},\mathbf {c^*}) = \dfrac{H(P_{c}\cdot \textbf {1},\mathbf {c^*}) - H(\mathbf {c},\mathbf {c^*})}{H(P_{c}\cdot \textbf {1},\mathbf {c^*})}$$ (Eq. 8)
where $\mathbf {1}$ is a length $T$ vector of ones, and the empirical estimate of ASR correctness is given by
$$P_{c} = \dfrac{1}{T}\sum _{t=1}^{T} {c}_{t}^{*}$$ (Eq. 9)
When hypothesised confidence scores $\mathbf {c}$ are systematically better than the estimate of ASR correctness $P_c$ , NCE is positive. In the limit of perfect confidence scores, NCE approaches one.
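For concreteness, Eqns. 8 and 9 can be computed with a few lines of NumPy; the following is our own sketch.

```python
import numpy as np

def binary_cross_entropy(c, c_star):
    c = np.clip(c, 1e-12, 1.0 - 1e-12)              # guard against log(0)
    return -np.mean(c_star * np.log(c) + (1.0 - c_star) * np.log(1.0 - c))

def nce(c, c_star):
    """Normalised cross-entropy: relative improvement of the predicted confidences
    over the constant predictor P_c, the empirical ASR correctness (Eqns. 8-9)."""
    c, c_star = np.asarray(c, float), np.asarray(c_star, float)
    p_c = c_star.mean()
    h_base = binary_cross_entropy(np.full_like(c, p_c), c_star)
    return (h_base - binary_cross_entropy(c, c_star)) / h_base
```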
NCE alone is not always the optimal metric for evaluating confidence estimators. This is because the theoretical limit of correct words being assigned a score of one and incorrect words a score of zero is not necessary for perfect operation of an upstream or downstream application. Often it is sufficient that the rank ordering of the predictions is such that all incorrect words fall below a certain threshold, and all correct words above it. This is the case, for instance, in various information retrieval tasks BIBREF25, BIBREF26. A more suitable metric in such cases could be an area under a curve (AUC)-type metric. For balanced data the chosen curve is often the receiver operating characteristic (ROC), whereas for imbalanced data, as is the case in this work, the precision-recall (PR) curve is normally used BIBREF27. The PR curve is obtained by plotting precision versus recall
$$\text{Precision}(\theta ) = \dfrac{\text{TP}(\theta )}{\text{TP}(\theta )+\text{FP}(\theta )},\; \text{Recall}(\theta ) = \dfrac{\text{TP}(\theta )}{\text{TP}(\theta ) + \text{FN}(\theta )}$$ (Eq. 10)
for a range of thresholds $\theta $ , where TP are true positives, FP and FN are false positives and negatives. When evaluating performance on lattices and confusion networks, these metrics are computed across all arcs in the network.
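Assuming scikit-learn, the PR curve and its area can be obtained directly from the predicted confidences and the 0/1 correctness targets over all arcs; the arrays below are toy placeholders for illustration.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

c = np.array([0.9, 0.2, 0.8, 0.6])          # predicted confidences over all arcs
c_star = np.array([1, 0, 1, 0])             # 0/1 correctness targets
precision, recall, _ = precision_recall_curve(c_star, c)
pr_auc = auc(recall, precision)             # area under the precision-recall curve
```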
Confusion Network and Lattice Extensions
A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. This section describes an extension of BiRNNs to CNs and lattices.
Fig. 2 shows that compared to 1-best sequences in Fig. 2 , each node in a CN may have multiple incoming arcs. Thus, a decision needs to be made on how to optimally propagate information to the outgoing arcs. Furthermore, any such approach would need to handle a variable number of incoming arcs. One popular approach BIBREF15 , BIBREF14 is to use a weighted combination
$$\overrightarrow{\mathbf {h}}_{t} = \sum _{i} \alpha _{t}^{(i)} \overrightarrow{\mathbf {h}}_{t}^{(i)}$$ (Eq. 14)
where $\overrightarrow{\mathbf {h}}_{t}^{(i)}$ represents the history information associated with the $i^{\text{th}}$ arc of the $t^{\text{th}}$ CN bin and $\alpha _{t}^{(i)}$ is the associated weight. A number of approaches can be used to set these weights. One simple approach is to set weights of all arcs other than the one with the highest posterior to zero. This yields a model that for 1-best hypotheses has no advantage over BiRNNs in Section "Bi-Directional Recurrent Neural Network" . Other simple approaches include average or normalised confidence score $\alpha _t^{(i)} = c_t^{(i)}/\sum _{j} c_t^{(j)}$ where $c_{t}^{(i)}$ is a word posterior probability, possibly mapped by decision trees. A more complex approach is an attention mechanism
$$\alpha _{t}^{(i)} = \dfrac{\exp (z_{t}^{(i)})}{\sum _{j} \exp (z_{t}^{(j)})}, \;\text{where } z_{t}^{(i)} = \sigma \left({\mathbf {w}^{(a)}}^{\sf {T}}\overrightarrow{\mathbf {k}}_{t}^{(i)} + b^{(a)}\right)$$ (Eq. 15)
where $\mathbf {w}^{(a)}$ and $b^{(a)}$ are attention parameters, $\overrightarrow{\mathbf {k}}_{t}^{(i)}$ is a key. The choice of the key is important as it helps the attention mechanism decide which information should be propagated. It is not obvious a priori what the key should contain. One option is to include arc history information as well as some basic confidence score statistics
$$\overrightarrow{\mathbf {k}}_{t}^{(i)} = \begin{bmatrix} \overrightarrow{\mathbf {h}}_{t}^{(i)^{\sf T}} & c_{t}^{(i)} & \mu _{t} & \sigma _{t} \end{bmatrix}^{\sf T}$$ (Eq. 16)
where $\mu _t$ and $\sigma _t$ are the mean and standard deviation computed over $c_t^{(i)}$ at time $t$ . At the next $(t+1)^{\text{th}}$ CN bin the forward information associated with the $i^{\text{th}}$ arc is updated by
$$\overrightarrow{\mathbf {h}}_{t+1}^{(i)} = \sigma (\mathbf { W}^{(\overrightarrow{{h}})}\overrightarrow{\mathbf {h}}_{t} + \mathbf { W}^{(x)}\mathbf {x}_{t+1}^{(i)})$$ (Eq. 17)
The confidence score for each CN arc is computed by
$$c_{t}^{(i)} = \sigma (\mathbf {w}^{(c)^{\sf T}}{\bf h}_{t}^{(i)} + {b}^{(c)})$$ (Eq. 18)
where ${\bf h}_{t}^{(i)}$ is an arc context vector
$${\bf h}_{t}^{(i)} = \begin{bmatrix} \overrightarrow{\mathbf {h}}_{t}^{(i)} & \overleftarrow{\mathbf {h}}_{t}^{(i)} \end{bmatrix}$$ (Eq. 19)
A summary of dependencies in this model is shown in Fig. 1 for a CN with 1 arc in the $t^{\text{th}}$ bin and 2 arcs in the $(t+1)^{\text{th}}$ bin.
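A possible PyTorch realisation of the attention-based merge of incoming arc states (Eqns. 14-16) is sketched below; this is our own illustration, and the layer sizes follow no particular released configuration.

```python
import torch
import torch.nn as nn

class ArcAttentionMerge(nn.Module):
    """Merge a variable number of incoming arc states into a single node state."""
    def __init__(self, state_dim):
        super().__init__()
        self.score = nn.Linear(state_dim + 3, 1)    # key = [arc state, c, mean, std]

    def forward(self, arc_states, arc_conf):
        # arc_states: (num_arcs, state_dim), arc_conf: (num_arcs,) word posteriors
        mu = torch.full_like(arc_conf, arc_conf.mean().item())
        sd = torch.full_like(arc_conf, arc_conf.std(unbiased=False).item())
        keys = torch.cat([arc_states, arc_conf[:, None], mu[:, None], sd[:, None]], dim=1)
        z = torch.sigmoid(self.score(keys)).squeeze(1)
        alpha = torch.softmax(z, dim=0)             # attention weights over incoming arcs
        return (alpha[:, None] * arc_states).sum(dim=0)
```

The merged node state can then be combined with the features of each outgoing arc in the next bin as in Eqn. 17.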
As illustrated in Fig. 2 , each node in a lattice marks a timestamp in an utterance and each arc represents a hypothesised word with its corresponding acoustic and language model scores. Although lattices do not normally obey a linear graph structure, if they are traversed in the topological order, no changes are required to compute confidences over lattice structures. The way the information is propagated in these graph structures is similar to the forward-backward algorithm BIBREF17 . There, the forward probability at time $t$ is
$$\overrightarrow{h}_{t+1}^{(i)} = \overrightarrow{h}_{t} x_{t+1}^{(i)}, \;\text{where } \overrightarrow{h}_{t} = \sum _{j} \alpha _{i,j} \overrightarrow{h}_{t}^{(j)}$$ (Eq. 20)
Compared to equations Eqn. 14 and Eqn. 17 , the forward recursion employs a different way to combine features $x_{t+1}^{(i)}$ and node states $\overrightarrow{h}_{t}$ , and maintains stationary weights, i.e. the transition probabilities $\alpha _{i,j}$ , for combining arc states $\overrightarrow{h}_{t}^{(j)}$ . In addition, each $\overrightarrow{h}_{t}^{(i)}$ has a probabilistic meaning which the vector $\overrightarrow{\mathbf {h}}_{t}^{(i)}$ does not. Furthermore, unlike in the standard algorithm, the past information at the final node is not constrained to be equal to the future information at the initial node.
In order to train these models, each arc of a CN or lattice needs to be assigned an appropriate reference confidence value. For aligning a reference word sequence to another sequence, the Levenshtein algorithm can be used. The ROVER method has been used to iteratively align word sequences to a pivot reference sequence to construct CNs BIBREF28 . This approach can be extended to confusion network combination (CNC), which allows the merging of two CNs BIBREF29 . The reduced CNC alignment scheme proposed here uses a reference one-best sequence rather than a CN as the pivot, in order to tag CN arcs against a reference sequence. A soft loss of aligning reference word $\omega _\tau $ with the $t^{\text{th}}$ CN bin is used
$$\ell _{t}(\omega _{\tau }) = 1 - P_{t}(\omega _{\tau })$$ (Eq. 21)
where $P_t(\omega )$ is a word posterior probability distribution associated with the CN bin at time $t$ . The optimal alignment is then found by minimising the above loss.
The extension of the Levenshtein algorithm to lattices, though possible, is computationally expensive BIBREF30 . Therefore approximate schemes are normally used BIBREF31 . Common to those schemes is the use of information about the overlap of lattice arcs and time-aligned reference words to compute the loss
$$o_{t,\tau } = \max \bigg \lbrace 0,\frac{|\min \lbrace e_{\tau }^{*},e_{t}\rbrace | - |\max \lbrace s_{\tau }^{*},s_{t}\rbrace |}{|\max \lbrace e_{\tau }^{*},e_{t}\rbrace |-|\min \lbrace s_{\tau }^{*},s_{t}\rbrace |}\bigg \rbrace $$ (Eq. 22)
where $\lbrace s_t, e_t\rbrace $ and $\lbrace s^{*}_{\tau }, e^{*}_{\tau }\rbrace $ are start and end times of lattice arcs and time-aligned words respectively. In order to yield a “hard” 0 or 1 loss, a threshold can be set either on the loss or on the amount of overlap.
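As a small illustration of Eqn. 22 and the subsequent thresholding, the following sketch (ours) tags a lattice arc given a time-aligned reference word; the 0.5 threshold is a placeholder rather than a value from the paper.

```python
def arc_overlap(arc_start, arc_end, ref_start, ref_end):
    """Relative time overlap between a lattice arc and a time-aligned reference word (Eqn. 22)."""
    intersection = min(ref_end, arc_end) - max(ref_start, arc_start)
    union = max(ref_end, arc_end) - min(ref_start, arc_start)
    return max(0.0, intersection / union)

def tag_arc(arc_start, arc_end, ref_start, ref_end, threshold=0.5):
    """Hard 0/1 reference confidence from a global threshold on the overlap."""
    return int(arc_overlap(arc_start, arc_end, ref_start, ref_end) >= threshold)
```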
Experiments
Evaluation was conducted on the IARPA Babel Georgian full language pack (FLP). The FLP contains approximately 40 hours of conversational telephone speech (CTS) for training and 10 hours for development. The lexicon was obtained using the automatic approach described in BIBREF32. The automatic speech recognition (ASR) system combines 4 diverse acoustic models in a single recognition run BIBREF33. The diversity is obtained through the use of different model types, a tandem and a hybrid, and features, multi-lingual bottlenecks extracted by IBM and RWTH Aachen from 28 languages. The language model is a simple $n$-gram estimated on acoustic transcripts and web data. As part of a larger consortium submission, this ASR system took part in the IARPA OpenKWS 2016 competition BIBREF34. The development data was used to assess the accuracy of confidence estimation approaches. The data was split with a ratio of $8:1:1$ into training, validation and test sets. The ASR system was used to produce lattices. Confusion networks were obtained from lattices using consensus decoding BIBREF7. The word error rates of the 1-best sequences are 39.9% for lattices and 38.5% for confusion networks.
The input features for the standard bi-directional recurrent neural network (BiRNN) and the CN-based BiRNN (BiCNRNN) are the decision-tree-mapped posterior, duration and a 50-dimensional fastText word embedding BIBREF35 estimated from web data. The lattice-based BiRNN (BiLatRNN) makes additional use of acoustic and language model scores. All forms of BiRNNs contain one $[\overrightarrow{128},\overleftarrow{128}]$ dimensional bi-directional LSTM layer and one 128 dimensional feed-forward hidden layer. The implementation uses the PyTorch library and is available online. For efficient training, model parameters are updated using Hogwild! stochastic gradient descent BIBREF36, which allows asynchronous updates on multiple CPU cores in parallel.
Table 1 shows the NCE and AUC performance of confidence estimation schemes on 1-best hypotheses extracted from CNs. As expected, “raw” posterior probabilities yield poor NCE results although AUC performance is high. The decision tree, as expected, improves NCE and does not affect AUC due to the monotonicity of the mapping. The BiRNN yields gains over the simple decision tree, which is consistent with the previous work in the area BIBREF11 , BIBREF12 .
The next experiment examines the extension of BiRNNs to confusion networks. The BiCNRNN uses a similar model topology, merges incoming arcs using the attention mechanism described in Section "Confusion Network and Lattice Extensions" and uses the Levenshtein algorithm with loss given by Eqn. 21 to obtain reference confidence values. The model parameters are estimated by minimising average binary cross-entropy loss on all CN arcs. The performance is evaluated over all CN arcs. When transitioning from 1-best arcs to all CN arcs the AUC performance is expected to drop due to an increase in the Bayes risk. Table 2 shows that BiCNRNN yields gains similar to BiRNN in Table 1 .
As mentioned in Section "Confusion Network and Lattice Extensions" there are alternatives to attention for merging incoming arcs. Table 3 shows that mean and normalised posterior weights may provide a competitive alternative.
Extending BiRNNs to lattices requires choosing a loss function and a method of setting reference values for lattice arcs. A simple global threshold on the amount of overlap between reference time-aligned words and lattice arcs is adopted to tag arcs. This scheme yields a false negative rate of 2.2% and a false positive rate of 0.9% on 1-best CN arcs, and 1.4% and 0.7% on 1-best lattice arcs. Table 4 shows the impact of using the approximate loss in training the BiCNRNN. The results suggest that the mismatch between training and testing criteria, i.e. approximate in training and Levenshtein in testing, could play a significant role in BiLatRNN performance. Using this approximate scheme, a BiLatRNN was trained on lattices.
Table 5 compares BiLatRNN performance to “raw” posteriors and decision trees. As expected, lower AUC performances are observed due to higher Bayes risk in lattices compared to CNs. The “raw” posteriors offer poor confidence estimates as can be seen from the large negative NCE and low AUC. The decision tree yields significant gains in NCE and no change in AUC performance. Note that the AUC for a random classifier on this data is 0.2466. The BiLatRNN yields very large gains in both NCE and AUC performance.
As mentioned in Section "Introduction", applications such as language learning and information retrieval rely on confidence scores to give high-precision feedback BIBREF37 or high-recall retrieval BIBREF25, BIBREF26. Therefore, Fig. 3 shows precision-recall curves for the BiRNN in Table 1 and the BiLatRNN in Table 5. Fig. 3 shows that the BiRNN yields the largest gain in the region of high precision and low recall, which is useful for feedback-like applications, whereas the BiLatRNN significantly improves precision in the high-recall region, which is useful for some retrieval tasks.
Conclusions
Confidence scores play an important role in many applications of spoken language technology. The standard form of confidence scores are decision-tree-mapped word posterior probabilities. A number of approaches have been proposed to improve confidence estimation, such as bi-directional recurrent neural networks (BiRNN). BiRNNs, however, can predict confidences of sequences only, which limits their more general application to 1-best hypotheses. This paper extends BiRNNs to confusion network (CN) and lattice structures. In particular, it proposes to use an attention mechanism to combine a variable number of incoming arcs, shows how the recursions are linked to the standard forward-backward algorithm and describes how to tag CN and lattice arcs with reference confidence values. Experiments were performed on a challenging limited-resource IARPA Babel Georgian pack and show that the extended forms of BiRNNs yield significant gains in confidence estimation accuracy over all arcs in CNs and lattices. Many related applications like information retrieval, speaker adaptation, keyword spotting and semi-supervised training will benefit from the improved confidence measure. | graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences |
06776b8dfd1fe27b5376ae44436b367a71ff9912 | 06776b8dfd1fe27b5376ae44436b367a71ff9912_0 | Q: What dataset is used for training?
Text: Introduction
Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks.
One of the first tasks during the description of an unfamiliar language is determining its phonemic inventory: what are the consonants, vowels, and tones of the language, and which pairs of phonemes are contrastive? Tone presents a unique challenge because unlike consonants and vowels, which can be identified in isolation, tones do not have a fixed pitch, and vary by speaker and situation. Since tone data is subject to interpretation, different linguists may produce different descriptions of the tone system of the same language BIBREF1.
In this work, we present a model to automatically infer phonemic tone categories of a tonal language. We use an unsupervised representation learning and clustering approach, which requires only a set of spoken words in the target language, and produces clusters of syllables that probably have the same tone. We apply our method on Mandarin Chinese and Cantonese datasets, for which the ground truth annotation is used for evaluation. Our method does not make any language-specific assumptions, so it may be applied to low-resource languages whose phonemic inventories are not already established.
Introduction ::: Tone in Mandarin and Cantonese
Mandarin Chinese (1.1 billion speakers) and Cantonese (74 million speakers) are two tonal languages in the Sinitic family BIBREF0. Mandarin has four lexical tones: high (55), rising (25), low-dipping (214), and falling (51). The third tone sometimes undergoes sandhi, addressed in section SECREF3. We exclude a fifth, neutral tone, which can only occur in word-final positions and has no fixed pitch.
Cantonese has six lexical tones: high-level (55), mid-rising (25), mid-level (33), low-falling (21), low-rising (23), and low-level (22). Some descriptions of Cantonese include nine tones, of which three are checked tones that are flat, shorter in duration, and only occur on syllables ending in /p/, /t/, or /k/. Since each one of the checked tones are in complementary distribution with an unchecked tone, we adopt the simpler six tone model that treats the checked tones as variants of the high, mid, and low level tones. Contours for the lexical tones in both languages are shown in Figure FIGREF2.
Related Work
Many low-resource languages lack sufficient transcribed data for supervised speech processing, thus unsupervised models for speech processing is an emerging area of research. The Zerospeech 2015 and 2017 challenges featured unsupervised learning of contrasting phonemes in English and Xitsonga, evaluated by an ABX phoneme discrimination task BIBREF3. One successful approach used denoising and correspondence autoencoders to learn a representation that avoided capturing noise and irrelevant inter-speaker variation BIBREF4. Deep LSTMs for segmenting and clustering phonemes in speech have also been explored in BIBREF5 and BIBREF6.
In Mandarin Chinese, deep neural networks have been successful for tone classification in isolated syllables BIBREF7 as well as in continuous speech BIBREF8, BIBREF9. Both of these models found that Mel-frequency cepstral coefficients (MFCCs) outperformed pitch contour features, despite the fact that MFCC features do not contain pitch information. In Cantonese, support vector machines (SVMs) have been applied to classify tones in continuous speech, using pitch contours as input BIBREF10.
Unsupervised learning of tones remains largely unexplored. Levow BIBREF11 performed unsupervised and semi-supervised tone clustering in Mandarin, using average pitch and slope as features, and $k$-means and asymmetric $k$-lines for clustering. Graph-based community detection techniques have been applied to group $n$-grams of contiguous contours into clusters in Mandarin BIBREF12. Our work appears to be the first to use unsupervised deep neural networks for phonemic tone clustering.
Data and Preprocessing
We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables.
We extract ground-truth tones for evaluation purposes. In Mandarin, the tones are extracted from the pinyin transcription; in Cantonese, we reference the character entries on Wiktionary to retrieve the romanized pronunciation and tones. For Mandarin, we correct for third-tone sandhi (a phonological rule where a pair of consecutive third-tones is always realized as a second-tone followed by a third-tone). We also exclude the neutral tone, which has no fixed pitch and is sometimes thought of as a lack of tone.
Data and Preprocessing ::: Pitch extraction and syllable segmentation
We use Praat's autocorrelation-based pitch estimation algorithm to extract the fundamental frequency (F0) contour for each sample, using a minimum frequency of 75Hz and a maximum frequency of 500Hz BIBREF13. The interface between Python and Praat is handled using Parselmouth BIBREF14. We normalize the contour to be between 0 and 1, based on the speaker's pitch range.
Next, we segment each speech sample into syllables, which is necessary because syllable boundaries are not provided in our datasets. This is done using a simple heuristic that detects continuously voiced segments, and manual annotation where the heuristic fails. To obtain a constant length pitch contour as input to our model, we sample the pitch at 40 equally spaced points. Note that by sampling a variable length contour to a constant length, information about syllable length is lost; this is acceptable because we consider tones which differ on length as variations of the same tone.
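The pitch extraction and resampling step could look roughly as follows; this is our own sketch assuming the Parselmouth API, and it normalises by the pitch range of the individual sample as a stand-in for the speaker's global range.

```python
import numpy as np
import parselmouth

def contour_40(wav_path, fmin=75, fmax=500, n_points=40):
    """Extract an F0 contour with Praat's autocorrelation method and resample
    the voiced part to a fixed-length, range-normalised contour."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(pitch_floor=fmin, pitch_ceiling=fmax)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                                   # keep voiced frames only
    f0 = (f0 - f0.min()) / (f0.max() - f0.min())      # normalise pitch to [0, 1]
    xs = np.linspace(0, len(f0) - 1, n_points)        # 40 equally spaced points
    return np.interp(xs, np.arange(len(f0)), f0)
```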
Model ::: Convolutional autoencoder
We use a convolutional autoencoder (Figure FIGREF4) to learn a two-dimensional latent vector for each syllable. Convolutional layers are widely used in computer vision and speech processing to learn spatially local features that are invariant of position. We use a low dimensional latent space so that the model learns to generate a representation that only captures the most important aspects of the input contour, and also because clustering algorithms tend to perform poorly in high dimensional spaces.
Our encoder consists of three layers. The first layer applies 2 convolutional filters (kernel size 4, stride 1) followed by max pooling (kernel size 2) and a tanh activation. The second layer applies 4 convolutional filters (kernel size 4, stride 1), again with max pooling (kernel size 2) and a tanh activation. The third layer is a fully connected layer with two dimensional output. Our decoder is the encoder in reverse, consisting of one fully connected layer and two deconvolution layers, with the same layer shapes as the encoder.
We train the autoencoder using PyTorch BIBREF15, for 500 epochs, with a batch size of 60. The model is optimized using Adam BIBREF16 with a learning rate of 5e-4 to minimize the mean squared error between the input and output contours.
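The architecture described above could be implemented roughly as below. This is our sketch, not the authors' code: it treats the 40-point contour as a 1-D signal, and the decoder kernel sizes (6) are our choice so that the reconstruction is exactly 40 samples long; the original decoder may differ.

```python
import torch
import torch.nn as nn

class ContourAutoencoder(nn.Module):
    """Convolutional autoencoder over 40-point pitch contours with a 2-d latent space."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 2, kernel_size=4, stride=1), nn.MaxPool1d(2), nn.Tanh(),
            nn.Conv1d(2, 4, kernel_size=4, stride=1), nn.MaxPool1d(2), nn.Tanh(),
            nn.Flatten(),
            nn.Linear(4 * 7, 2),                       # two-dimensional latent vector
        )
        self.decoder = nn.Sequential(
            nn.Linear(2, 4 * 7), nn.Unflatten(1, (4, 7)),
            nn.ConvTranspose1d(4, 2, kernel_size=6, stride=2), nn.Tanh(),
            nn.ConvTranspose1d(2, 1, kernel_size=6, stride=2),   # back to length 40
        )

    def forward(self, x):                              # x: (batch, 1, 40)
        z = self.encoder(x)
        return self.decoder(z), z

model = ContourAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
loss_fn = nn.MSELoss()                                 # mean squared error on the contour
```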
Model ::: Mean shift clustering
We run the encoder on each syllable's pitch contour to get their latent representations; we apply principal component analysis (PCA) to remove any correlation between the two dimensions. Then, we run mean shift clustering BIBREF17, BIBREF18, estimating a probability density function in the latent space. The procedure performs gradient ascent on all the points until they converge to a set of stationary points, which are local maxima of the density function. These stationary points are taken to be cluster centers, and points that converge to the same stationary point belong to the same cluster.
Unlike $k$-means clustering, the mean shift procedure does not require the number of clusters to be specified, only a bandwidth parameter (set to 0.6 for our experiments). The cluster centers are always in regions of high density, so they can be viewed as prototypes that represent their respective clusters. Another advantage is that unlike $k$-means, mean shift clustering is robust to outliers.
Although the mean shift procedure technically assigns every point to a cluster, not all such clusters are linguistically plausible as phonemic tones, because they contain very few points. Thus, we take only clusters larger than a threshold, determined empirically from the distribution of cluster sizes; the rest are considered spurious clusters and we treat them as unclustered. Finally, we feed the remaining cluster centers into the decoder to generate a prototype pitch contour for each cluster.
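Concretely, the clustering stage could be implemented with scikit-learn as follows (our sketch); the minimum cluster size stands in for the empirically chosen threshold and is not a value from the paper.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.decomposition import PCA

def cluster_tones(latents, bandwidth=0.6, min_cluster_size=10):
    """Decorrelate the 2-d latents with PCA, run mean shift, and mark
    implausibly small clusters as unclustered (-1)."""
    z = PCA(n_components=2).fit_transform(latents)
    ms = MeanShift(bandwidth=bandwidth).fit(z)
    counts = np.bincount(ms.labels_)
    kept = [k for k in range(len(counts)) if counts[k] >= min_cluster_size]
    labels = np.where(np.isin(ms.labels_, kept), ms.labels_, -1)
    return labels, ms.cluster_centers_[kept]
```

The retained cluster centers can then be passed through the decoder to obtain one prototype pitch contour per cluster.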
Results
Figure FIGREF9 shows the latent space learned by the autoencoders and the clustering output. Our model found 4 tone clusters in Mandarin, matching the number of phonemic tones (Table TABREF12) and 5 in Cantonese, which is one fewer than the number of phonemic tones (Table TABREF13). In Mandarin, the 4 clusters correspond very well with the 4 phonemic tone categories, and the generated contours closely match the ground truth in Figure FIGREF2. There is some overlap between tones 3 and 4; this is because tone 3 is sometimes realized as a low-falling tone without the final rise, a process known as half T3 sandhi BIBREF19; thus, it may overlap with tone 4 (falling tone).
In Cantonese, the 5 clusters A-E correspond to low-falling, mid-level, high-level, mid-rising, and low-rising tones. Tone clustering in Cantonese is expected to be more difficult than in Mandarin because it has 6 contrastive tones rather than 4. The model is more effective at clustering the higher tones (1, 2, 3), and less effective at clustering the lower tones (4, 5, 6), particularly tone 4 (low-falling) and tone 6 (low-level). This mirrors the difficulties reported in prior work, which found worse classification accuracy on the lower-pitched tones because the lower region of the Cantonese tone space is more crowded than the upper region BIBREF10.
Two other sources of error are carry-over and declination effects. A carry-over effect is when the pitch contour of a tone undergoes contextual variation depending on the preceding tone; strong carry-over effects have been observed in Mandarin BIBREF20. Prior work BIBREF11 avoided carry-over effects by using only the second half of every syllable, but we do not consider language-specific heuristics in our model. Declination is a phenomenon in which the pitch declines over an utterance BIBREF1, BIBREF10. This is especially a problem in Cantonese, which has tones that differ only on pitch level and not contour: for example, a mid-level tone near the end of a phrase may have the same absolute pitch as a low-level tone at the start of a phrase.
To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.
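One convenient way to put a number on such comparisons is normalized mutual information (NMI) between cluster assignments and the annotated tones; treating the tables as reporting this kind of cluster quality is our reading of the excerpt, so the sketch below is hedged accordingly.

```python
from sklearn.metrics import normalized_mutual_info_score

def clustering_score(gold_tones, cluster_labels, first_syllable_mask=None):
    """NMI between ground-truth tone labels and cluster assignments; optionally
    restricted to word-initial syllables to factor out carry-over and declination."""
    if first_syllable_mask is not None:
        gold_tones = [t for t, keep in zip(gold_tones, first_syllable_mask) if keep]
        cluster_labels = [c for c, keep in zip(cluster_labels, first_syllable_mask) if keep]
    return normalized_mutual_info_score(gold_tones, cluster_labels)
```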
Conclusions and future work
We propose a model for unsupervised clustering and discovery of phonemic tones in tonal languages, using spoken words as input. Our model extracts the F0 pitch contour, trains a convolutional autoencoder to learn a low-dimensional representation for each contour, and applies mean shift clustering to the resulting latent space. We obtain promising results with both Mandarin Chinese and Cantonese, using only 400 spoken words from each language. Cantonese presents more difficulties because of its larger number of tones, especially at the lower half of the pitch range, and also due to multiple contrastive level tones. Finally, we briefly explore the influence of contextual variation on our model.
A limitation of this study is that our model only considers pitch, which is only one aspect of tone. In reality, pitch is determined not only by tone, but by a complex mixture of intonation, stress, and other prosody effects. Tone is not a purely phonetic property – it is impossible to determine on a phonetic basis whether two pitch contours have distinct underlying tones, or are variants of the same underlying tone (perhaps in complementary distribution). Instead, two phonemic tones can be shown to be contrastive only by providing a minimal pair, where two semantically different lexical items are identical in every respect other than their tones. The last problem is not unique to tone: similar difficulties have been noted when attempting to identify consonant and vowel phonemes automatically BIBREF21. In future work, we plan to further explore these issues and develop more nuanced models to learn tone from speech.
Acknowledgments
We thank Prof Gerald Penn for his helpful suggestions during this project. Rudzicz is a CIFAR Chair in AI. | Mandarin dataset, Cantonese dataset |
f1831b2e96ff8ef65b8fde8b4c2ee3e04b7ac4bf | f1831b2e96ff8ef65b8fde8b4c2ee3e04b7ac4bf_0 | Q: How close do clusters match to ground truth tone categories?
Text: Introduction
Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks.
One of the first tasks during the description of an unfamiliar language is determining its phonemic inventory: what are the consonants, vowels, and tones of the language, and which pairs of phonemes are contrastive? Tone presents a unique challenge because unlike consonants and vowels, which can be identified in isolation, tones do not have a fixed pitch, and vary by speaker and situation. Since tone data is subject to interpretation, different linguists may produce different descriptions of the tone system of the same language BIBREF1.
In this work, we present a model to automatically infer phonemic tone categories of a tonal language. We use an unsupervised representation learning and clustering approach, which requires only a set of spoken words in the target language, and produces clusters of syllables that probably have the same tone. We apply our method on Mandarin Chinese and Cantonese datasets, for which the ground truth annotation is used for evaluation. Our method does not make any language-specific assumptions, so it may be applied to low-resource languages whose phonemic inventories are not already established.
Introduction ::: Tone in Mandarin and Cantonese
Mandarin Chinese (1.1 billion speakers) and Cantonese (74 million speakers) are two tonal languages in the Sinitic family BIBREF0. Mandarin has four lexical tones: high (55), rising (25), low-dipping (214), and falling (51). The third tone sometimes undergoes sandhi, addressed in section SECREF3. We exclude a fifth, neutral tone, which can only occur in word-final positions and has no fixed pitch.
Cantonese has six lexical tones: high-level (55), mid-rising (25), mid-level (33), low-falling (21), low-rising (23), and low-level (22). Some descriptions of Cantonese include nine tones, of which three are checked tones that are flat, shorter in duration, and only occur on syllables ending in /p/, /t/, or /k/. Since each of the checked tones is in complementary distribution with an unchecked tone, we adopt the simpler six-tone model that treats the checked tones as variants of the high, mid, and low level tones. Contours for the lexical tones in both languages are shown in Figure FIGREF2.
Related Work
Many low-resource languages lack sufficient transcribed data for supervised speech processing; thus, unsupervised models for speech processing are an emerging area of research. The Zerospeech 2015 and 2017 challenges featured unsupervised learning of contrasting phonemes in English and Xitsonga, evaluated by an ABX phoneme discrimination task BIBREF3. One successful approach used denoising and correspondence autoencoders to learn a representation that avoided capturing noise and irrelevant inter-speaker variation BIBREF4. Deep LSTMs for segmenting and clustering phonemes in speech have also been explored in BIBREF5 and BIBREF6.
In Mandarin Chinese, deep neural networks have been successful for tone classification in isolated syllables BIBREF7 as well as in continuous speech BIBREF8, BIBREF9. Both of these models found that Mel-frequency cepstral coefficients (MFCCs) outperformed pitch contour features, despite the fact that MFCC features do not contain pitch information. In Cantonese, support vector machines (SVMs) have been applied to classify tones in continuous speech, using pitch contours as input BIBREF10.
Unsupervised learning of tones remains largely unexplored. Levow BIBREF11 performed unsupervised and semi-supervised tone clustering in Mandarin, using average pitch and slope as features, and $k$-means and asymmetric $k$-lines for clustering. Graph-based community detection techniques have been applied to group $n$-grams of contiguous contours into clusters in Mandarin BIBREF12. Our work appears to be the first model to use unsupervised deep neural networks for phonemic tone clustering.
Data and Preprocessing
We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The Mandarin dataset is from a female speaker and is provided by Shtooka, and the Cantonese dataset is from a male speaker and is downloaded from Forvo, an online crowd-sourced pronunciation dictionary. We require all samples within each language to be from the same speaker to avoid the difficulties associated with channel effects and inter-speaker variation. We randomly sample 400 words from each language, which are mostly between 2 and 4 syllables; to reduce the prosody effects with longer utterances, we exclude words longer than 4 syllables.
We extract ground-truth tones for evaluation purposes. In Mandarin, the tones are extracted from the pinyin transcription; in Cantonese, we reference the character entries on Wiktionary to retrieve the romanized pronunciation and tones. For Mandarin, we correct for third-tone sandhi (a phonological rule where a pair of consecutive third-tones is always realized as a second-tone followed by a third-tone). We also exclude the neutral tone, which has no fixed pitch and is sometimes thought of as a lack of tone.
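The sandhi correction described here amounts to rewriting any third tone that is immediately followed by another third tone; a sketch, assuming tone sequences are encoded as integers 1–4 per syllable:

```python
def correct_third_tone_sandhi(tones):
    """Rewrite a 3 immediately followed by another 3 as a 2 (applied left to right;
    the excerpt does not specify how longer runs of third tones are handled)."""
    corrected = list(tones)
    for i in range(len(corrected) - 1):
        if corrected[i] == 3 and corrected[i + 1] == 3:
            corrected[i] = 2
    return corrected

# correct_third_tone_sandhi([3, 3]) -> [2, 3]
```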
Data and Preprocessing ::: Pitch extraction and syllable segmentation
We use Praat's autocorrelation-based pitch estimation algorithm to extract the fundamental frequency (F0) contour for each sample, using a minimum frequency of 75Hz and a maximum frequency of 500Hz BIBREF13. The interface between Python and Praat is handled using Parselmouth BIBREF14. We normalize the contour to be between 0 and 1, based on the speaker's pitch range.
Next, we segment each speech sample into syllables, which is necessary because syllable boundaries are not provided in our datasets. This is done using a simple heuristic that detects continuously voiced segments, and manual annotation where the heuristic fails. To obtain a constant length pitch contour as input to our model, we sample the pitch at 40 equally spaced points. Note that by sampling a variable length contour to a constant length, information about syllable length is lost; this is acceptable because we consider tones which differ on length as variations of the same tone.
Model ::: Convolutional autoencoder
We use a convolutional autoencoder (Figure FIGREF4) to learn a two-dimensional latent vector for each syllable. Convolutional layers are widely used in computer vision and speech processing to learn spatially local features that are invariant of position. We use a low dimensional latent space so that the model learns to generate a representation that only captures the most important aspects of the input contour, and also because clustering algorithms tend to perform poorly in high dimensional spaces.
Our encoder consists of three layers. The first layer applies 2 convolutional filters (kernel size 4, stride 1) followed by max pooling (kernel size 2) and a tanh activation. The second layer applies 4 convolutional filters (kernel size 4, stride 1), again with max pooling (kernel size 2) and a tanh activation. The third layer is a fully connected layer with two dimensional output. Our decoder is the encoder in reverse, consisting of one fully connected layer and two deconvolution layers, with the same layer shapes as the encoder.
We train the autoencoder using PyTorch BIBREF15, for 500 epochs, with a batch size of 60. The model is optimized using Adam BIBREF16 with a learning rate of 5e-4 to minimize the mean squared error between the input and output contours.
Model ::: Mean shift clustering
We run the encoder on each syllable's pitch contour to get their latent representations; we apply principal component analysis (PCA) to remove any correlation between the two dimensions. Then, we run mean shift clustering BIBREF17, BIBREF18, estimating a probability density function in the latent space. The procedure performs gradient ascent on all the points until they converge to a set of stationary points, which are local maxima of the density function. These stationary points are taken to be cluster centers, and points that converge to the same stationary point belong to the same cluster.
Unlike $k$-means clustering, the mean shift procedure does not require the number of clusters to be specified, only a bandwidth parameter (set to 0.6 for our experiments). The cluster centers are always in regions of high density, so they can be viewed as prototypes that represent their respective clusters. Another advantage is that unlike $k$-means, mean shift clustering is robust to outliers.
Although the mean shift procedure technically assigns every point to a cluster, not all such clusters are linguistically plausible as phonemic tones, because they contain very few points. Thus, we take only clusters larger than a threshold, determined empirically from the distribution of cluster sizes; the rest are considered spurious clusters and we treat them as unclustered. Finally, we feed the remaining cluster centers into the decoder to generate a prototype pitch contour for each cluster.
Results
Figure FIGREF9 shows the latent space learned by the autoencoders and the clustering output. Our model found 4 tone clusters in Mandarin, matching the number of phonemic tones (Table TABREF12) and 5 in Cantonese, which is one fewer than the number of phonemic tones (Table TABREF13). In Mandarin, the 4 clusters correspond very well with the 4 phonemic tone categories, and the generated contours closely match the ground truth in Figure FIGREF2. There is some overlap between tones 3 and 4; this is because tone 3 is sometimes realized as a low-falling tone without the final rise, a process known as half T3 sandhi BIBREF19; thus, it may overlap with tone 4 (falling tone).
In Cantonese, the 5 clusters A-E correspond to low-falling, mid-level, high-level, mid-rising, and low-rising tones. Tone clustering in Cantonese is expected to be more difficult than in Mandarin because it has 6 contrastive tones rather than 4. The model is more effective at clustering the higher tones (1, 2, 3), and less effective at clustering the lower tones (4, 5, 6), particularly tone 4 (low-falling) and tone 6 (low-level). This mirrors the difficulties reported in prior work, which found worse classification accuracy on the lower-pitched tones because the lower region of the Cantonese tone space is more crowded than the upper region BIBREF10.
Two other sources of error are carry-over and declination effects. A carry-over effect is when the pitch contour of a tone undergoes contextual variation depending on the preceding tone; strong carry-over effects have been observed in Mandarin BIBREF20. Prior work BIBREF11 avoided carry-over effects by using only the second half of every syllable, but we do not consider language-specific heuristics in our model. Declination is a phenomenon in which the pitch declines over an utterance BIBREF1, BIBREF10. This is especially a problem in Cantonese, which has tones that differ only on pitch level and not contour: for example, a mid-level tone near the end of a phrase may have the same absolute pitch as a low-level tone at the start of a phrase.
To test this hypothesis, we evaluate the model on only the first syllable of every word, which eliminates carry-over and declination effects (Table TABREF14). In both Mandarin and Cantonese, the clustering is more accurate when using only the first syllables, compared to using all of the syllables.
Conclusions and future work
We propose a model for unsupervised clustering and discovery of phonemic tones in tonal languages, using spoken words as input. Our model extracts the F0 pitch contour, trains a convolutional autoencoder to learn a low-dimensional representation for each contour, and applies mean shift clustering to the resulting latent space. We obtain promising results with both Mandarin Chinese and Cantonese, using only 400 spoken words from each language. Cantonese presents more difficulties because of its larger number of tones, especially at the lower half of the pitch range, and also due to multiple contrastive level tones. Finally, we briefly explore the influence of contextual variation on our model.
A limitation of this study is that our model only considers pitch, which is only one aspect of tone. In reality, pitch is determined not only by tone, but by a complex mixture of intonation, stress, and other prosody effects. Tone is not a purely phonetic property – it is impossible to determine on a phonetic basis whether two pitch contours have distinct underlying tones, or are variants of the same underlying tone (perhaps in complementary distribution). Instead, two phonemic tones can be shown to be contrastive only by providing a minimal pair, where two semantically different lexical items are identical in every respect other than their tones. The last problem is not unique to tone: similar difficulties have been noted when attempting to identify consonant and vowel phonemes automatically BIBREF21. In future work, we plan to further explore these issues and develop more nuanced models to learn tone from speech.
Acknowledgments
We thank Prof Gerald Penn for his helpful suggestions during this project. Rudzicz is a CIFAR Chair in AI. | NMI between cluster assignments and ground truth tones for all syllables is:
Mandarin: 0.641
Cantonese: 0.464 |
20ec88c45c1d633adfd7bff7bbf3336d01fb6f37 | 20ec88c45c1d633adfd7bff7bbf3336d01fb6f37_0 | Q: what are the evaluation metrics?
Text: Introduction
A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.) and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be used to refer to 18 different entities, including the continent, the European Union, various Greek mythological entities, a rock band, some music albums, a magazine, a short story, etc. Furthermore, it is possible to refer to a named entity by means of anaphoric pronouns and co-referent expressions such as `he', `her', `their', `I', `the 35 year old', etc. Therefore, in order to provide an adequate and comprehensive account of named entities in text, it is necessary to recognize the mention of a named entity and to classify it by a pre-defined type (e.g., person, location, organization). Named Entity Recognition and Classification (NERC) is usually a required step to perform Named Entity Disambiguation (NED), namely to link `Europe' to the right Wikipedia article, and to resolve every form of mentioning or co-referring to the same entity.
Nowadays NERC systems are widely being used in research for tasks such as Coreference Resolution BIBREF0 , Named Entity Disambiguation BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 for which a lot of interest has been created by the TAC KBP shared tasks BIBREF6 , Machine Translation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , Aspect Based Sentiment Analysis BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , Event Extraction BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 and Event Ordering BIBREF20 .
Moreover, NERC systems are integrated in the processing chain of many industrial software applications, mostly by companies offering specific solutions for a particular industrial sector which require recognizing named entities specific of their domain. There is therefore a clear interest in both academic research and industry to develop robust and efficient NERC systems: For industrial vendors it is particularly important to diversify their services by including NLP technology for a variety of languages whereas in academic research NERC is one of the foundations of many other NLP end-tasks.
Most NERC taggers are supervised statistical systems that extract patterns and term features which are considered to be indications of Named Entity (NE) types using the manually annotated training data (extracting orthographic, linguistic and other types of evidence) and often external knowledge resources. As in other NLP tasks, supervised statistical NERC systems are more robust and obtain better performance on available evaluation sets, although sometimes the statistical models can also be combined with specific rules for some NE types. For best performance, supervised statistical approaches require manually annotated training data, which is both expensive and time-consuming. This has seriously hindered the development of robust high performing NERC systems for many languages but also for other domains and text genres BIBREF21 , BIBREF22 , in what we will henceforth call `out-of-domain' evaluations.
Moreover, supervised NERC systems often require fine-tuning for each language and, as some of the features require language-specific knowledge, this poses yet an extra complication for the development of robust multilingual NERC systems. For example, it is well-known that in German every noun is capitalized and that compounds including named entities are pervasive. This also applies to agglutinative languages such as Basque, Korean, Finnish, Japanese, Hungarian or Turkish. For this type of languages, it had usually been assumed that linguistic features (typically Part of Speech (POS) and lemmas, but also semantic features based on WordNet, for example) and perhaps specific hand-crafted rules, were a necessary condition for good NERC performance as they would allow to capture better the most recurrent declensions (cases) of named entities for Basque BIBREF23 or to address problems such as sparsity and capitalization of every noun for German BIBREF24 , BIBREF25 , BIBREF26 . This language dependency was easy to see in the CoNLL 2002 and 2003 tasks, in which systems participating in the two available languages for each edition obtained in general different results for each language. This suggests that without fine-tuning for each corpus and language, the systems did not generalize well across languages BIBREF27 .
This paper presents a multilingual and robust NERC system based on simple, general and shallow features that heavily relies on word representation features for high performance. Even though we do not use linguistic motivated features, our approach also works well for inflected languages such as Basque and German. We demonstrate the robustness of our approach by reporting best results for five languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, including seven in-domain and eight out-of-domain evaluations.
Contributions
The main contributions of this paper are the following: First, we show how to easily develop robust NERC systems across datasets and languages with minimal human intervention, even for languages with declension and/or complex morphology. Second, we empirically show how to effectively use various types of simple word representation features thereby providing a clear methodology for choosing and combining them. Third, we demonstrate that our system still obtains very competitive results even when the supervised data is reduced by half (even less in some cases), alleviating the dependency on costly hand annotated data. These three main contributions are based on:
A simple and shallow robust set of features across languages and datasets, even in out-of-domain evaluations.
The lack of linguistic motivated features, even for languages with agglutinative (e.g., Basque) and/or complex morphology (e.g., German).
A clear methodology for using and combining various types of word representation features by leveraging public unlabeled data.
Our approach consists of shallow local features complemented by three types of word representation (clustering) features: Brown clusters BIBREF28 , Clark clusters BIBREF29 and K-means clusters on top of the word vectors obtained by using the Skip-gram algorithm BIBREF30 . We demonstrate that combining and stacking different clustering features induced from various data sources (Reuters, Wikipedia, Gigaword, etc.) allows to cover different and more varied types of named entities without manual feature tuning. Even though our approach is much simpler than most, we obtain the best results for Dutch, Spanish and English and comparable results in German (on CoNLL 2002 and 2003). We also report best results for German using the GermEval 2014 shared task data and for Basque using the Egunkaria testset BIBREF23 .
We report out-of-domain evaluations in three languages (Dutch, English and Spanish) using four different datasets to compare our system with the best publicly available systems for those languages: Illinois NER BIBREF31 for English, Stanford NER BIBREF32 for English and Spanish, SONAR-1 NERD for Dutch BIBREF33 and Freeling for Spanish BIBREF34 . We outperform every other system in the eight out-of-domain evaluations reported in Section SECREF79 . Furthermore, the out-of-domain results show that our clustering features provide a simple and easy method to improve the robustness of NERC systems.
Finally, and inspired by previous work BIBREF35 , BIBREF36 we measure how much supervision is required to obtain state of the art results. In Section SECREF75 we show that we can still obtain very competitive results reducing the supervised data by half (and sometimes even more). This, together with the lack of linguistic features, means that our system considerably saves data annotation costs, which is quite convenient when trying to develop a NERC system for a new language and/or domain.
Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. Our NERC system is publicly available and distributed under the Apache 2.0 License and part of the IXA pipes tools BIBREF38 . Every result reported in this paper is obtained using the conlleval script from the CoNLL 2002 and CoNLL 2003 shared tasks. To guarantee reproducibility of results we also make publicly available the models and the scripts used to perform the evaluations. The system, models and evaluation scripts can be found in the ixa-pipe-nerc website.
The next section reviews related work, focusing on the best performing NERC systems for each language evaluated on standard shared task data. Section SECREF3 presents the design of our system and our overall approach to NERC. In Section SECREF4 we report the evaluation results obtained by our system for 5 languages (Basque, Dutch, English, German and Spanish) on 12 different datasets, distributed in 7 in-domain and 8 out-of-domain evaluations. Section SECREF5 discusses the results and contributions of our approach. In Section SECREF6 we highlight the main aspects of our work, providing some concluding remarks and outlining future work on applying our NERC approach to other text genres, domains and sequence labeling tasks.
Related Work
The Named Entity Recognition and Classification (NERC) task was first defined for the Sixth Message Understanding Conference (MUC 6) BIBREF39 . The MUC 6 tasks focused on Information Extraction (IE) from unstructured text and NERC was deemed to be an important IE sub-task with the aim of recognizing and classifying nominal mentions of persons, organizations and locations, and also numeric expressions of dates, money, percentage and time. In the following years, research on NERC increased as it was considered to be a crucial source of information for other Natural Language Processing tasks such as Question Answering (QA) and Textual Entailment (RTE) BIBREF39 . Furthermore, while MUC 6 was solely devoted to English as target language, the CoNLL shared tasks (2002 and 2003) boosted research on language independent NERC for 3 additional target languages: Dutch, German and Spanish BIBREF40 , BIBREF41 .
The various MUC, ACE and CoNLL evaluations provided a very convenient framework to test and compare NERC systems, algorithms and approaches. They provided manually annotated data for training and testing the systems as well as an objective evaluation methodology. Using such framework, research rapidly evolved from rule-based approaches (consisting of manually handcrafted rules) to language independent systems focused on learning supervised statistical models. Thus, while in the MUC 6 competition 5 out of 8 systems were rule-based, in CoNLL 2003 16 teams participated in the English task all using statistical-based NERC BIBREF39 .
Datasets
Table TABREF10 describes the 12 datasets used in this paper. The first half lists the corpora used for in-domain evaluation whereas the lower half contains the out-of-domain datasets. The CoNLL NER shared tasks focused on language independent machine learning approaches for 4 entity types: person, location, organization and miscellaneous entities. The 2002 edition provided manually annotated data in Dutch and Spanish whereas in 2003 the languages were German and English. In addition to the CoNLL data, for English we also use the formal run of MUC 7 and Wikigold for out-of-domain evaluation. Very detailed descriptions of CoNLL and MUC data can easily be found in the literature, including the shared task descriptions themselves BIBREF42 , BIBREF40 , BIBREF41 , so in the following we will describe the remaining, newer datasets.
The Wikigold corpus consists of 39K words of English Wikipedia manually annotated following the CoNLL 2003 guidelines BIBREF27 . For Spanish and Dutch, we also use Ancora 2.0 BIBREF43 and SONAR-1 BIBREF33 respectively. SONAR-1 is a one million word Dutch corpus with both coarse-grained and fine-grained named entity annotations. The coarse-grained level includes product and event entity types in addition to the four types defined in CoNLL data. Ancora adds date and number types to the CoNLL four main types. In Basque the only gold standard corpus is Egunkaria BIBREF23 . Although the Basque Egunkaria dataset is annotated with four entity types, the miscellaneous class is extremely sparse, occurring only in a proportion of 1 to 10. Thus, in the training data there are 156 entities annotated as MISC whereas each of the other three classes contain around 1200 entities.
In the datasets described so far, named entities were assumed to be non-recursive and non-overlapping. During the annotation process, if a named entity was embedded in a longer one, then only the longest mention was annotated. The exceptions are the GermEval 2014 shared task data for German and MEANTIME, where nested entities are also annotated (both inner and outer spans).
The GermEval 2014 NER shared task BIBREF25 aimed at improving the state of the art of German NERC which was perceived to be comparatively lower than the English NERC. Two main extensions were introduced in GermEval 2014; (i) fine grained named entity sub-types to indicate derivations and compounds; (ii) embedded entities (and not only the longest span) are annotated. In total, there are 12 types for classification: person, location, organization, other plus their sub-types annotated at their inner and outer levels.
Finally, the MEANTIME corpus BIBREF44 is a multilingual (Dutch, English, Italian and Spanish) publicly available evaluation set annotated within the Newsreader project. It consists of 120 documents, divided into 4 topics: Apple Inc., Airbus and Boeing, General Motors, Chrysler and Ford, and the stock market. The articles are selected in such a way that the corpus contains different articles that deal with the same topic over time (e.g. launch of a new product, discussion of the same financial indexes). Moreover, it contains nested entities so the evaluation results will be provided in terms of the outer and the inner spans of the named entities. MEANTIME includes six named entity types: person, location, organization, product, financial and mixed.
Related Approaches
Named entity recognition is a task with a long history in NLP. Therefore, we will summarize those approaches that are most relevant to our work, especially those we will directly compared with in Section SECREF4 . Since CoNLL shared tasks, the most competitive approaches have been supervised systems learning CRF, SVM, Maximum Entropy or Averaged Perceptron models. In any case, while the machine learning method is important, it has also been demonstrated that good performance might largely be due to the feature set used BIBREF45 . Table TABREF13 provides an overview of the features used by previous best scoring approaches for each of the five languages we address in this paper.
Traditionally, local features have included contextual and orthographic information, affixes, character-based features, prediction history, etc. As argued by the CoNLL 2003 organizers, no feature set was deemed to be ideal for NERC BIBREF41 , although many approaches for English refer to BIBREF46 as a useful general approach.
Some of the CoNLL participants use linguistic information (POS, lemmas, chunks, but also specific rules or patterns) for Dutch and English BIBREF47, BIBREF45, although this type of feature was deemed to be most important for German, for which the use of linguistic features is pervasive BIBREF25. This is due to the sparsity caused by declension cases, the tendency to form compounds containing named entities and the capitalization of every noun BIBREF24. For example, the best system among the 11 participants in GermEval 2014, ExB, uses morphological features and specific suffix lists aimed at capturing frequent patterns in the endings of named entities BIBREF48.
In agglutinative languages such as Basque, which contains declension cases for named entities, linguistic features are considered to be a requirement. For example, the country name `Espainia' (Spain in Basque) can occur in several forms, Espainian, Espainiera, Espainiak, Espainiarentzat, Espainiako, and many more. Linguistic information has been used to treat this phenomenon. The only previous work for Basque developed Eihera, a rule-based NERC system formalized as finite state transducers to take into account declension classes BIBREF23 . The features of Eihera include word, lemma, POS, declension case, capitalized lemma, etc. These features are complemented with gazetteers extracted from the Euskaldunon Egunkaria newspaper and semantic information from the Basque WordNet.
Dictionaries are widely used to inject world knowledge via gazetteer matches as features in machine learning approaches to NERC. The best performing systems carefully compile their own gazetteers from a variety of sources BIBREF47 . BIBREF31 leverage a collection of 30 gazetteers and matches against each one are weighted as a separate feature. In this way they trust each gazetteer to a different degree. BIBREF49 carefully compiled a large collection of English gazetteers extracted from US Census data and Wikipedia and applied them to the process of inducing word embeddings with very good results.
While it is possible to automatically extract them from various corpora or resources, they still require careful manual inspection of the target data. Thus, our approach only uses off the shelf gazetteers whenever they are publicly available. Furthermore, our method collapses every gazetteer into one dictionary. This means that we only add a feature per token, instead of a feature per token and gazetteer.
The intuition behind non-local (or global) features is to treat similarly all occurrences of the same named entity in a text. BIBREF47 proposed a method to produce the set of named entities for the whole sentence, where the optimal set of named entities for the sentence is the coherent set of named entities which maximizes the summation of confidences of the named entities in the set. BIBREF31 developed three types of non-local features, analyzing global dependencies in a window of between 200 and 1000 tokens.
Semi-supervised approaches leveraging unlabeled text had already been applied to improve results in various NLP tasks. More specifically, it had been previously shown how to apply Brown clusters BIBREF28 for Chinese Word Segmentation BIBREF50 , dependency parsing BIBREF35 , NERC BIBREF51 and POS tagging BIBREF36 .
BIBREF31 used Brown clusters as features obtaining what was at the time the best published result of an English NERC system on the CoNLL 2003 testset. BIBREF52 made a rather exhaustive comparison of Brown clusters, Collobert and Weston's embeddings BIBREF53 and HLBL embeddings BIBREF54 to improve chunking and NERC. They show that in some cases the combination of word representation features was positive but, although they used Ratinov and Roth's (2009) system as starting point, they did not manage to improve over the state of the art. Furthermore, they reported that Brown clustering features performed better than the word embeddings.
BIBREF49 extend the Skip-gram algorithm to learn 50-dimensional lexicon infused phrase embeddings from 22 different gazetteers and the Wikipedia. The resulting embeddings are used as features by scaling them by a hyper-parameter which is a real number tuned on the development data. BIBREF49 report best results up to date for English NERC on CoNLL 2003 test data, 90.90 F1.
The best German CoNLL 2003 system (an ensemble) was outperformed by BIBREF24 . They trained the Stanford NER system BIBREF32 , which uses a linear-chain Conditional Random Field (CRF) with a variety of features, including lemma, POS tag, etc. Crucially, they included “distributional similarity” features in the form of Clark clusters BIBREF29 induced from large unlabeled corpora: the Huge German Corpus (HGC) of around 175M tokens of newspaper text and the deWac corpus BIBREF55 consisting of 1.71B tokens of web-crawled data. Using the clusters induced from deWac as a form of semi-supervision improved the results over the best CoNLL 2003 system by 4 points in F1.
The best participant of the English CoNLL 2003 shared task used the results of two externally trained NERC taggers to create an ensemble system BIBREF56 . BIBREF49 develop a stacked linear-chain CRF system: they train two CRFs with roughly the same features; the second CRF can condition on the predictions made by the first CRF. Their “baseline” system uses a similar local featureset as Ratinov and Roth's (2009) but complemented with gazetteers. Their baseline system combined with their phrase embeddings trained with infused lexicons allow them to report the best CoNLL 2003 result so far.
The best system of the GermEval 2014 task built an ensemble of classifiers and pattern extractors to find the most likely tag sequence BIBREF48 . They paid special attention to out of vocabulary words which are addressed by semi-supervised word representation features and an ensemble of POS taggers. Furthermore, remaining unknown candidate mentions are tackled by look-up via the Wikipedia API.
Apart from the feature types, the last two columns of Table TABREF13 refer to whether the systems are publicly available and whether any external resources used for training are made available (e.g., induced word embeddings, gazetteers or corpora). This is desirable to be able to re-train the systems on different datasets. For example, we would have been interested in training the Stanford NER system with the full Ancora corpus for the evaluation presented in Table TABREF85 , but their Spanish cluster lexicon is not available. Alternatively, we would have liked to train our system with the same Ancora partition used to train Stanford NER, but that is not available either.
System Description
The design of ixa-pipe-nerc aims at establishing a simple and shallow feature set, avoiding any linguistically motivated features, with the objective of removing any reliance on costly extra gold annotations (POS tags, lemmas, syntax, semantics) and/or cascading errors if automatic language processors are used. The underlying motivation is to obtain robust models to facilitate the development of NERC systems for other languages and datasets/domains while obtaining state of the art results. Our system consists of three groups of features: local features, gazetteer-based features and clustering features, described in the following subsections.
Table TABREF24 provides an example of the features generated by our system.
Local Features
The local features constitute our baseline system on top of which the clustering features are added. We implement the following feature set, partially inspired by previous work BIBREF46 :
Token: Current lowercase token (w), namely, ekuadorko in Table TABREF24 .
Token Shape: Current lowercase token (w) plus current token shape (wc), where token shape consist of: (i) The token is either lowercase or a 2 digit word or a 4 digit word; (ii) If the token contains digits, then whether it also contains letters, or slashes, or hyphens, or commas, or periods or is numeric; (iii) The token is all uppercase letters or is an acronym or is a one letter uppercase word or starts with capital letter. Thus, in Table TABREF24 1994an is a 4 digit word (4d), Ekuadorko has an initial capital shape (ic) and hiriburuan is lowercase (lc).
Previous prediction: the previous outcome (pd) for the current token. The previous predictions in our example are null because these words have not been seen previously, except for the comma.
Sentence: Whether the token is the beginning of the sentence. None of the tokens in our example is at the beginning of the sentence, so this feature is not active in Table TABREF24 .
Prefix: Two prefixes consisting of the first three and four characters of the current token: Eku and Ekua.
Suffix: The four suffixes of length one to four from the last four characters of the current token.
Bigram: Bigrams including the current token and the token shape.
Trigram: Trigrams including the current token and the token shape.
Character n-gram: All lowercase character bigrams, trigrams, fourgrams and fivegrams from the current token (ng).
Token, token shape and previous prediction features are placed in a 5 token window, namely, for these three features we also consider the previous and the next two words, as shown in Table TABREF24.
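The actual implementation lives in the Java-based Apache OpenNLP machinery of ixa-pipe-nerc; the Python sketch below only illustrates the kind of feature strings described above, with simplified shape classes and illustrative feature names.

```python
def token_shape(token):
    """Coarse shape classes, a simplified version of the description above."""
    if token.isdigit():
        return {2: "2d", 4: "4d"}.get(len(token), "num")
    if any(c.isdigit() for c in token):
        return "alnum"
    if token.isupper():
        return "ac" if len(token) > 1 else "su"
    if token[0].isupper():
        return "ic"
    return "lc"

def char_ngrams(token, n_min=2, n_max=5):
    t = token.lower()
    return [t[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(t) - n + 1)]

def local_features(tokens, i, prev_predictions):
    """Features for tokens[i]; token, shape and previous-prediction features use a
    5-token window (two tokens of context on each side)."""
    feats = []
    for j in range(max(0, i - 2), min(len(tokens), i + 3)):
        w, pos = tokens[j].lower(), j - i
        feats += [f"w[{pos}]={w}", f"wc[{pos}]={w},{token_shape(tokens[j])}"]
        if j < i:
            feats.append(f"pd[{pos}]={prev_predictions[j]}")
    w = tokens[i]
    feats += [f"pref3={w[:3]}", f"pref4={w[:4]}"]
    feats += [f"suf{k}={w[-k:]}" for k in range(1, 5)]
    feats += [f"ng={g}" for g in char_ngrams(w)]
    if i == 0:
        feats.append("begin_sentence")
    return feats
```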
Gazetteers
We add gazetteers to our system only if they are readily available to use, but our approach does not fundamentally depend upon them. We perform a look-up in a gazetteer to check if a named entity occurs in the sentence. The result of the look-up is represented with the same encoding chosen for the training process, namely, the BIO or BILOU scheme. Thus, for the current token we add the following features:
The current named entity class in the encoding schema. Thus, in the BILOU encoding we would have “unit”, “beginning”, “last”, “inside”, or if not match is found, “outside”, combined with the specific named entity type (LOC, ORG, PER, MISC, etc.).
The current named entity class as above and the current token.
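The dictionary look-up described above could be sketched as follows, with all gazetteers collapsed into a single mapping from lowercased entity strings to their types; the data structure and the maximum span length are our choices for illustration.

```python
def bilou_positions(entity_length):
    if entity_length == 1:
        return ["U"]
    return ["B"] + ["I"] * (entity_length - 2) + ["L"]

def gazetteer_features(tokens, dictionary, max_span=5):
    """dictionary maps lowercased multi-word strings to entity types, e.g.
    {"new york": "LOC"}; returns one BILOU tag per token ('O' where nothing matches)."""
    tags = ["O"] * len(tokens)
    for start in range(len(tokens)):
        for end in range(min(len(tokens), start + max_span), start, -1):
            candidate = " ".join(t.lower() for t in tokens[start:end])
            if candidate in dictionary and all(t == "O" for t in tags[start:end]):
                etype = dictionary[candidate]
                for offset, pos in enumerate(bilou_positions(end - start)):
                    tags[start + offset] = f"{pos}-{etype}"
                break
    return [f"dict={tag}" for tag in tags]
```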
Clustering Features
The general idea is that by using some type of semantic similarity or word cluster induced over large unlabeled corpora it is possible to improve the predictions for unseen words in the test set. This type of semi-supervised learning may be aimed at improving performance over a fixed amount of training data or, given a fixed target performance level, to establish how much supervised data is actually required to reach such performance BIBREF35 .
So far the most successful approaches have only used one type of word representation BIBREF49 , BIBREF24 , BIBREF31 . However, our simple baseline combined with one type of word representation features are not able to compete with previous, more complex, systems. Thus, instead of encoding more elaborate features, we have devised a simple method to combine and stack various types of clustering features induced over different data sources or corpora. In principle, our method can be used with any type of word representations. However, for comparison purposes, we decided to use word representations previously used in successful NERC approaches: Brown clusters BIBREF31 , BIBREF52 , Word2vec clusters BIBREF49 and Clark clusters BIBREF32 , BIBREF24 . As can be observed in Table TABREF24 , our clustering features are placed in a 5 token window.
The Brown clustering algorithm BIBREF28 is a hierarchical algorithm which clusters words to maximize the mutual information of bigrams. Thus, it is a class-based bigram model in which:
The probability of a document corresponds to the product of the probabilities of its bigrams,
the probability of each bigram is calculated by multiplying the probability of a bigram model over latent classes by the probability of each class generating the actual word types in the bigram, and
each word type has non-zero probability only on a single class.
The Brown algorithm takes a vocabulary of words to be clustered and a corpus of text containing these words. It starts by assigning each word in the vocabulary to its own separate cluster, then iteratively merges the pair of clusters which leads to the smallest decrease in the likelihood of the text corpus. This produces a hierarchical clustering of the words, which is usually represented as a binary tree, as shown in Figure FIGREF44 . In this tree every word is uniquely identified by its path from the root, and the path can be represented by a bit string. It is also possible to choose different levels of word abstraction by choosing different depths along the path from the root to the word. Therefore, by using paths of various lengths, we obtain clustering features of different granularities BIBREF57 .
We use paths of length 4, 6, 10 and 20 as features BIBREF31 . However, we introduce several novelties in the design of our Brown clustering features:
For each feature which is token-based, we add a feature containing the paths computed for the current token. Thus, taking into account our baseline system, we will add the following Brown clustering features:
Brown Token: existing paths of length 4, 6, 10 and 20 for the current token.
Brown Token Shape: existing paths of length 4, 6, 10, 20 for the current token and current token shape.
Brown Bigram: existing paths of length 4, 6, 10, 20 for bigrams including the current token.
Brown clustering features benefit from two additional features:
Previous prediction plus token: the previous prediction (pd) for the current token and the current token.
Previous two predictions: the previous prediction for the current and the previous token.
For space reasons, Table TABREF24 only shows the Brown Token (bt) and Brown Token Shape (c) features for paths of length 4 and 6. We use the publicly available tool implemented by BIBREF50 with default settings. The input consists of a corpus tokenized and segmented one sentence per line, without punctuation. Furthermore, we follow previous work and remove all sentences which consist of less than 90% lowercase characters BIBREF50 , BIBREF52 before inducing the Brown clusters.
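Reading the clustering output and deriving the path-prefix features of length 4, 6, 10 and 20 could look like the sketch below; the three-column layout of the paths file (bit string, word, frequency) is the usual output of the tool of BIBREF50, but should be treated as an assumption.

```python
def load_brown_paths(paths_file):
    """Map each word to its Brown bit-string path ('bitstring<TAB>word<TAB>count' lines)."""
    word2path = {}
    with open(paths_file, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                word2path[parts[1]] = parts[0]
    return word2path

def brown_path_features(token, word2path, prefix_lengths=(4, 6, 10, 20)):
    """Path prefixes at several granularities; if a path is shorter than the requested
    length, the whole path is used. Case handling here is an assumption."""
    path = word2path.get(token, word2path.get(token.lower()))
    if path is None:
        return []
    return [f"brown{n}={path[:n]}" for n in prefix_lengths]
```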
BIBREF29 presents a number of unsupervised algorithms, based on distributional and morphological information, for clustering words into classes from unlabeled text. The focus is on clustering infrequent words on a small numbers of clusters from comparatively small amounts of data. In particular, BIBREF29 presents an algorithm combining distributional information with morphological information of words “by composing the Ney-Essen clustering model with a model for the morphology within a Bayesian framework”. The objective is to bias the distributional information to put words that are morphologically similar in the same cluster. We use the code released by BIBREF29 off the shelf to induce Clark clusters using the Ney-Essen with morphological information method. The input of the algorithm is a sequence of lowercase tokens without punctuation, one token per line with sentence breaks.
Our Clark clustering features are very simple: we perform a look-up of the current token in the clustering lexicon. If a match is found, we add as a feature the clustering class, or the lack of match if the token is not found (see Clark-a and Clark-b in Table TABREF24 ).
Another family of language models that produces word representations are the neural language models. These approaches produce representation of words as continuous vectors BIBREF53 , BIBREF54 , also called word embeddings. Nowadays, perhaps the most popular among them is the Skip-gram algorithm BIBREF30 . The Skip-gram algorithm uses shallow log-linear models to compute vector representation of words which are more efficient than previous word representations induced on neural language models. Their objective is to produce word embeddings by computing the probability of each n-gram as the product of the conditional probabilities of each context word in the n-gram conditioned on its central word BIBREF30 .
Instead of using continuous vectors as real numbers, we induce clusters or word classes from the word vectors by applying K-means clustering. In this way we can use the cluster classes as simple binary features by injecting unigram match features. We use the Word2vec tool released by BIBREF30 with a 5 window context to train 50-dimensional word embeddings and to obtain the word clusters on top of them. The input of the algorithm is a corpus tokenized, lowercased, with punctuation removed and in one line. The Word2vec features are implemented exactly like the Clark features.
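A sketch of inducing these K-means word classes on top of Skip-gram vectors with Gensim and scikit-learn follows; parameter names follow Gensim 4 (older releases used size instead of vector_size), and the number of clusters is left as a parameter since the experiments explore the 100–600 range.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def induce_word2vec_clusters(sentences, n_clusters=400, dim=50, window=5):
    """sentences: iterable of token lists from the lowercased, punctuation-free corpus."""
    w2v = Word2Vec(sentences, vector_size=dim, window=window, sg=1,
                   min_count=5, workers=4)  # sg=1 selects the Skip-gram architecture
    words = w2v.wv.index_to_key
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(w2v.wv.vectors)
    return {word: int(cluster) for word, cluster in zip(words, labels)}

def word2vec_feature(token, cluster_lexicon, name="w2v"):
    """Unigram match feature: the cluster class of the current token, if present."""
    cluster = cluster_lexicon.get(token.lower())
    return [] if cluster is None else [f"{name}={cluster}"]
```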
We successfully combine clustering features from different word representations. Furthermore, we also stack or accumulate features of the same type of word representation induced from different data sources, trusting each clustering lexicon to a different degree, as shown by the five encoded clustering features in Table TABREF24 : two Clark and Word2vec features from different source data and one Brown feature. When using word representations as semi-supervised features for a task like NERC, two principal factors need to be taken into account: (i) the source data or corpus used to induce the word representations and (ii) the actual word representation used to encode our features which in turn modify the weight of our model's parameters in the training process.
For the clustering features to be effective the induced clusters need to contain as many words appearing in the training, development and test sets as possible. This can be achieved by using corpora closely related to the text genre or domain of the data sets or by using very large unlabeled corpora which, although not closely domain-related, be large enough to include many relevant words. For example, with respect to the CoNLL 2003 English dataset an example of the former would be the Reuters corpus while the Wikipedia would be an example of the latter.
The word representations obtained by different algorithms would capture different distributional properties of words in a given corpus or data source. Therefore, each type of clustering would allow us to capture different types of occurring named entity types. In other words, combining and stacking different types of clustering features induced over a variety of data sources should help to capture more similarities between different words in the training and test sets, increasing the contribution to the weights of the model parameters in the training process.
Experimental Results
In this Section we report on the experiments performed with the ixa-pipe-nerc system as described in the previous section. The experiments are performed in 5 languages: Basque, Dutch, English, German and Spanish. For comparison purposes, in-domain results are presented in Section SECREF61 using the most common NERC datasets for each language as summarized in Table TABREF10 . Section SECREF75 analyzes the performance when reducing training data and Section SECREF79 presents eight out-of-domain evaluations for three languages: Dutch, English and Spanish.
The results for Dutch, English and Spanish do not include trigrams and character n-grams in the local featureset described in Section SECREF25 , except for the models in each in-domain evaluation which are marked with “charngram 1:6”.
We also experiment with dictionary features but, in contrast to previous approaches such as BIBREF49 , we only use currently available gazetteers off-the-shelf. For every model marked with “dict” we use the thirty English Illinois NER gazetteers BIBREF31 , irrespective of the target language. Additionally, the English models use six gazetteers about the Global Automotive Industry provided by LexisNexis to the Newsreader project, whereas the German models include, in addition to the Illinois gazetteers, the German dictionaries distributed in the CoNLL 2003 shared task. The gazetteers are collapsed into one large dictionary and deployed as described in Section SECREF35 .
Finally, the clustering features are obtained by processing the following clusters from publicly available corpora: (i) 1000 Brown clusters; (ii) Clark and Word2vec clusters in the 100-600 range. To choose the best combination of clustering features we test the available permutations of Clark and Word2vec clusters with and without the Brown clusters on the development data. Table TABREF58 provides details of every corpus used to induce the clusters. For example, the first row reads: “Reuters RCV1 was used; the original 63 million words were reduced to 35 million after pre-processing for inducing Brown clusters. Clark and Word2vec clusters were trained on the whole corpus”. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF38 .
Every evaluation is carried out using the CoNLL NER evaluation script. The results are obtained with the BILOU encoding for every experimental setting except for German CoNLL 2003.
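Running the official script typically amounts to writing one token per line with its gold and predicted tags and piping the file through conlleval; the script name and invocation below follow the usual convention rather than anything stated in this excerpt.

```python
import subprocess

def write_conll_output(path, sentences):
    """sentences: list of lists of (token, gold_tag, predicted_tag) triples."""
    with open(path, "w", encoding="utf-8") as f:
        for sentence in sentences:
            for token, gold, pred in sentence:
                f.write(f"{token} {gold} {pred}\n")
            f.write("\n")  # blank line separates sentences

def run_conlleval(path, script="conlleval.pl"):
    """Pipe the file through the CoNLL evaluation script and return its report
    (precision, recall and F1 per entity type and overall)."""
    with open(path, encoding="utf-8") as f:
        result = subprocess.run(["perl", script], stdin=f, capture_output=True, text=True)
    return result.stdout
```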
In-domain evaluation
In this section the results are presented by language. In two cases, Dutch and German, we use two different datasets, making it a total of seven in-domain evaluations.
We tested our system in the highly competitive CoNLL 2003 dataset. Table TABREF63 shows that three of our models outperform previous best results reported for English in the CoNLL 2003 dataset BIBREF49 . Note that the best F1 score (91.36) is obtained by adding trigrams and character n-gram features to the best model (91.18). The results also show that these models improve the baseline provided by the local features by around 7 points in F1 score. The most significant gain is in terms of recall, almost 9 points better than the baseline.
We also report very competitive results, only marginally lower than BIBREF49 , based on the stacking and combination of clustering features as described in Section UID57 . Thus, both best cluster and comp models, based on local plus clustering features only, outperform very competitive and more complex systems such as those of BIBREF31 and BIBREF52 , and obtain only marginally lower results than BIBREF49 . The stacking and combining effect manifests itself very clearly when we compare the single clustering feature models (BR, CW600, W2VG200 and W2VW400) with the light, comp and best cluster models which improve the overall F1 score by 1.30, 1.72 and 1.85 respectively over the best single clustering model (CW600).
It is worth mentioning that our models do not score best in the development data. As the development data is closer in style and genre to the training data BIBREF31 , this may suggest that our system generalizes better on test data that is not close to the training data; indeed, the results reported in Section SECREF79 seem to confirm this hypothesis.
We also compared our results with respect to the best two publicly available English NER systems trained on the same data. We downloaded the Stanford NER system distributed in the 2015-01-30 package. We evaluated their CoNLL model and, while the result is substantially better than their reference paper BIBREF32 , our clustering models obtain better results. The Illinois NER tagger is used by BIBREF31 and BIBREF52 , both of which are outperformed by our system.
We tested our system in the GermEval 2014 dataset. Table TABREF65 compares our results with the best two systems (ExB and UKP) by means of the M3 metric, which separately analyzes the performance in terms of the outer and inner named entity spans. Table TABREF65 makes explicit the significant improvements achieved by the clustering features on top of the baseline system, particularly in terms of recall (almost 11 points in the outer level). The official results of our best configuration (de-cluster-dict) are reported in Table TABREF66 showing that our system marginally improves the best systems' results on that task (ExB and UKP).
We also compare our system, in the last three rows, with the publicly available GermaNER BIBREF26, which reports results for the 4 main outer level entity types (person, location, organization and other). For this experiment we trained the de-cluster and de-cluster + dict models on the four main classes, improving GermaNER's results by almost 3 F1 points. The GermaNER method of evaluation is interesting because it allows researchers to directly compare their systems with a publicly available system trained on GermEval data.
Table TABREF67 compares our German CoNLL 2003 results with the best previous work trained on public data. Our best CoNLL 2003 model obtains results on a par with the best system published to date BIBREF24.
BIBREF24 also report 78.20 F1 with a model trained with Clark clusters induced using the Huge German Corpus (HGC). Unfortunately, neither the corpus nor the induced clusters were available.
The best system to date on the CoNLL 2002 dataset, originally published by BIBREF47, is distributed as part of the Freeling library BIBREF34. Table TABREF69 lists four models that improve over their reported results, by almost 3 points in F1 measure in the case of the es-cluster model (with or without trigram and character n-gram features).
Despite using clusters from one data source only (see Table TABREF58), results in Table TABREF71 show that our nl-cluster model outperforms the best result published on CoNLL 2002 BIBREF45 by 3.83 points in F1 score. Adding the English Illinois NER gazetteers BIBREF31 and trigram and character n-gram features increases the score to 85.04 F1, 5.41 points better than previously published work on this dataset.
We also compared our system with the more recently developed SONAR-1 corpus and the companion NERD system distributed with its release BIBREF33. They report 84.91 F1 for the six main named entity types via 10-fold cross validation. For this comparison we chose the local, nl-cluster and nl-cluster-dict configurations from Table TABREF71 and ran them on SONAR-1 using the same settings. The results reported in Table TABREF72 show our system's improvement over previous results on this dataset.
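As a point of reference, the 10-fold setup assumed in this comparison can be sketched as follows; the document-level split and the train/evaluate callable are our own simplification and not the scripts distributed with SONAR-1.

```python
def ten_fold_f1(documents, train_and_eval, k=10):
    """Split the corpus into k folds at document level, train on k-1 folds,
    evaluate on the held-out fold and return the average F1 score."""
    folds = [documents[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        training = [doc for j, fold in enumerate(folds) if j != i for doc in fold]
        scores.append(train_and_eval(training, held_out))
    return sum(scores) / k
```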
Table TABREF74 reports on the experiments using the Egunkaria NER dataset provided by BIBREF23. Due to the sparsity of the MISC class mentioned in Section SECREF9, we decided to train our models on three classes only (location, organization and person). Thus, the results are obtained by training our models in the customary manner and evaluating on 3 classes. However, for direct comparison with previous work BIBREF23, we also evaluate our best eu-cluster model (trained on 3 classes) on 4 classes.
The results show that our eu-cluster model clearly improves upon previous work by 4 points in F1 measure (75.40 vs 71.35). These results are particularly interesting as it had so far been assumed that complex linguistic features and language-specific rules were required to perform well for agglutinative languages such as Basque BIBREF23. Finally, it is worth noting that the eu-cluster model increases the overall F1 score by 11.72 points over the baseline, with gains of 10 points in precision and 13 points in recall.
Reducing training data
So far, we have seen how, given a fixed amount of supervised training data, leveraging unlabeled data via multiple cluster sources helped to obtain state of the art results in seven different in-domain settings for five languages. In this section we investigate to what extent our system allows us to reduce the dependency on supervised training data.
We first use the English CoNLL 2003 dataset for this experiment. The training set consists of around 204K words and we use various smaller versions of it to test the performance of our best cluster model reported in Table TABREF63. Table TABREF76 displays the F1 results of the baseline system consisting of local features and the best cluster model. The Δ column refers to the gains of our best cluster model with respect to the baseline model for every portion of the training set.
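A sketch of how the reduced training sets and the Δ column can be produced is given below; the fractions and the two evaluation callables are illustrative stand-ins for the actual training runs.

```python
def training_portions(sentences, fractions=(1.0, 0.5, 0.25, 0.125)):
    """Return progressively smaller prefixes of the training data
    (full set, one half, one quarter, one eighth)."""
    return {f: sentences[: int(len(sentences) * f)] for f in fractions}


def delta_table(portions, f1_baseline, f1_best_cluster, test_set):
    """For every portion report baseline F1, best-cluster F1 and the gain."""
    rows = []
    for fraction, portion in sorted(portions.items(), reverse=True):
        base = f1_baseline(portion, test_set)
        cluster = f1_best_cluster(portion, test_set)
        rows.append((fraction, base, cluster, cluster - base))
    return rows
```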
While we have already commented on the substantial gains obtained simply by adding our clustering features, it is also interesting to note that the gains are much more substantial when less supervised training data is available. Furthermore, it is striking that training our cluster-based model on only one eighth of the training data (30K words) yields performance similar to the baseline system trained on the full training set. Equally interesting is the fact that cutting the training data by half only marginally harms the overall performance. Finally, training on just a quarter of the training set (60K words) results in a very competitive model when compared with other publicly available NER systems for English trained on the full training set: it roughly matches Stanford NER's performance, it outperforms models using external knowledge or non-local features reported by BIBREF31, and also several models reported by BIBREF52, which use one type of word representation on top of the baseline system.
We have also re-trained the Illinois NER system BIBREF31 and our best CoNLL 2003 model (en-91-18) for comparison. First, we can observe that for every portion of the training set, both our best cluster and en-91-18 models outperform the Illinois NER system. The best cluster results are noteworthy because, unlike Illinois NER, this model does not use gazetteers or global features for extra performance.
These results are mirrored by those obtained for the rest of the languages and datasets. Thus, Table TABREF77 displays, for each language, the F1 results of the baseline system and of the best cluster models on top of the baseline. Overall, it confirms that our cluster-based models obtain state of the art results using just one half of the data. Furthermore, using just one quarter of the training data we are able to match the results of other publicly available systems for every language, outperforming in some cases, such as Basque, much more complex systems of classifiers exploiting language-specific rules and features (POS tags, lemmas, semantic information from WordNet, etc.). Considering that Basque is a low-resourced language, it is particularly relevant to be able to reduce as much as possible the amount of gold supervised data required to develop a competitive NERC system.
Out-of-domain evaluations
NERC systems are often used in out-of-domain settings, namely, to annotate data that greatly differs from the data from which the NERC models were learned. These differences can be of text genre and/or domain, but also because the assumptions of what constitutes a named entity might differ. It is therefore interesting to develop robust NERC systems across both domains and datasets. In this section we demonstrate that our approach, consisting of basic, general local features and the combination and stacking of clusters, produces robust NERC systems in three out-of-domain evaluation settings:
Class disagreements: Named entities are assigned to different classes in training and test.
Different text genre: The text genre of training and test data differs.
Annotation guidelines: The gold annotation of the test data follows different guidelines from the training data. This is usually reflected in different named entity spans.
The datasets and languages chosen for these experiments are based on the availability of both previous results and publicly distributed NERC systems to facilitate direct comparison of our system with other approaches. Table TABREF83 specifies the datasets used for each out-of-domain setting and language. Details of each dataset can be found in Table TABREF10.
MUC 7 annotates seven entity types, including four that are not included in CoNLL data: DATE, MONEY, NUMBER and TIME entities. Furthermore, CoNLL includes the MISC class, which was absent in MUC 7. This means that there are class disagreements in the gold standard annotation between the training and test datasets. In addition to the four CoNLL classes, SONAR-1 includes PRODUCT and EVENT whereas Ancora also annotates DATE and NUMBER. For example, consider the following sentence of the MUC 7 gold standard (example taken from BIBREF31 ):
“...balloon, called the Virgin Global Challenger.”
The gold annotation in MUC 7 establishes that there is one named entity:
“...balloon, called [ORG Virgin] Global Challenger.”
However, according to CoNLL 2003 guidelines, the entire name should be annotated as MISC:
“...balloon, called [MISC Virgin Global Challenger].”
In this setting some adjustments are made to the NERC systems' output. Following previous work BIBREF31, every named entity that is not LOC, ORG, PER or MISC is labeled as `O'. Additionally, for MUC 7 every MISC named entity is changed to `O'. For English we used the models reported in Section UID62. For Spanish and Dutch we trained our system with the Ancora and SONAR-1 corpora using the configurations described in Sections UID68 and UID70 respectively. Table TABREF85 compares our results with previous approaches: using MUC 7, BIBREF52 provide standard phrase-based results whereas BIBREF31 report token-based F1 scores, namely, each token is considered a chunk instead of considering multi-token spans. For Spanish we use the Stanford NER Spanish model (2015-01-30 version) trained with Ancora. For Dutch we compare our SONAR-1 system with the companion system distributed with the SONAR-1 corpus BIBREF33. The results are summarized in Table TABREF85.
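The label adjustments described at the beginning of this section can be summarized with a small mapping function; the helper below is our own sketch of that post-processing step, not code taken from the evaluated systems.

```python
CONLL_TYPES = {"LOC", "ORG", "PER", "MISC"}


def normalize_tag(tag, drop_misc=False):
    """Map a BIO/BILOU tag onto the CoNLL type set: entities of any other
    type (DATE, MONEY, PRODUCT, ...) become 'O'; with drop_misc=True, as
    used for MUC 7, MISC entities are also relabeled as 'O'."""
    if tag == "O":
        return tag
    etype = tag.split("-", 1)[1]
    if etype not in CONLL_TYPES or (drop_misc and etype == "MISC"):
        return "O"
    return tag


# ['B-PER', 'O', 'O'] when evaluating against MUC 7
print([normalize_tag(t, drop_misc=True) for t in ["B-PER", "B-DATE", "U-MISC"]])
```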
In this setting the out-of-domain character is given by the differences in text genre between the English CoNLL 2003 set and the Wikigold corpus. We compare our system with English models trained on large amounts of silver-standard text (3.5M tokens) automatically created from the Wikipedia BIBREF27. They report results on Wikigold showing that they outperformed their own CoNLL 2003 gold-standard model by 10 points in F1 score. We compare their result with our best cluster model in Table TABREF87. While the results of our baseline model confirm theirs, the score of our clustering model is slightly higher. This result is interesting because it is arguably simpler to induce the clusters we use to train ixa-pipe-nerc than to create the silver-standard training set from Wikipedia as described in BIBREF27.
In this section the objective is to study not so much the differences in textual genre as the influence of substantially different annotation standards. We only use three classes (location, organization and person) to evaluate the best models presented in the in-domain evaluations, labeling as `O' every entity which is not LOC, ORG or PER.
The text genre of MEANTIME is not that different from CoNLL data. However, differences in the gold standard annotation result in significant disagreements regarding the span of the named entities BIBREF59 . For example, the following issues are markedly different with respect to the training data we use for each language:
Different criteria to decide when a named entity is annotated: in the expression “40 billion US air tanker contract” the MEANTIME gold standard does not mark `US' as location, whereas in the training data this is systematically annotated.
Mentions including the definite article within the name entity span: `the United States' versus `United States'.
Longer extents containing common nouns: in the MEANTIME corpus there are many entities such as “United States airframer Boeing”, which in this case is considered an organization, whereas in the training data this span would in general consist of two entities: `United States' as location and `Boeing' as organization.
Common nouns modifying the proper name: `Spokeswoman Sandy Angers' is annotated as a named entity of type PER whereas in the training data we use, the span of the named entity would usually be `Sandy Angers'.
CoNLL NER phrase-based evaluation punishes any bracketing error as both a false positive and a false negative. Thus, these span-related disagreements make this setting extremely hard for models trained according to other annotation guidelines, as shown by Table TABREF93. Our baseline models degrade by around 40 F1 points and the cluster-based models by around 35. Other systems' results worsen much more, especially for Spanish and Dutch. The token-based scores are in general better, but the relative performance of the systems across languages is similar.
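To make the difference between the two scoring schemes concrete, consider a minimal sketch in which phrase-level scoring only credits exact (type, start, end) matches, so a span disagreement is punished twice, whereas token-level scoring still rewards the overlapping tokens. The spans below re-use the `Spokeswoman Sandy Angers' example and are purely illustrative.

```python
def exact_f1(predicted, gold):
    """Micro F1 over two sets of items; only exact matches count."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    p, r = tp / len(predicted), tp / len(gold)
    return 2 * p * r / (p + r)


# Gold span following MEANTIME-style guidelines vs. a CoNLL-trained prediction
gold_phrases = {("PER", 3, 6)}   # 'Spokeswoman Sandy Angers'
pred_phrases = {("PER", 4, 6)}   # 'Sandy Angers'
gold_tokens = {("PER", 3), ("PER", 4), ("PER", 5)}
pred_tokens = {("PER", 4), ("PER", 5)}

print(exact_f1(pred_phrases, gold_phrases))            # 0.0: counted as FP and FN
print(round(exact_f1(pred_tokens, gold_tokens), 2))    # 0.8: partial credit per token
```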
As an additional experiment, we also tested the English model recommended by Stanford NER which is trained for three classes (LOC, PER, ORG) using a variety of public and (not identified) private corpora (referred to as Stanford NER 3 class (ALL) in Table TABREF94). The results with respect to their CoNLL model improved by around 3 points in F1 score across named entity labels and evaluation types (phrase or token based). In view of these results, we experimented with multi-corpora training data added to our best CoNLL 2003 model (en-91-18). Thus, we trained using three public training sets: MUC 7, CoNLL 2003 and OntoNotes 4.0. The local model with the three training sets (Local ALL) improved by 12 and 17 points in F1 score across evaluations and entity types, outperforming our best model trained only with CoNLL 2003. Adding the clustering features gained a further 2 to 5 points, surpassing the Stanford NER 3 class multi-corpora model in every evaluation. We believe that the main reason for these improvements is the variety and quantity of annotations provided by OntoNotes (a 1M word corpus), and to a lesser extent by MUC 7, which includes some spans containing common nouns and determiners, making the model slightly more robust with respect to mention spans.
Discussion
Despite the simplicity of the ixa-pipe-nerc approach, we report the best results for English on 4 different datasets: on CoNLL 2003 and on the three English out-of-domain evaluations. For German we improve on the results of the best system in the GermEval 2014 task and obtain results comparable to previous work on the CoNLL 2003 dataset using publicly available data. For Spanish we provide results on CoNLL 2002 and on two out-of-domain evaluations, clearly outperforming previous best results. For Dutch we improve over previous results on CoNLL 2002 and SONAR-1 data and on two out-of-domain evaluations. Finally, for Basque (Egunkaria) the improvements are considerable.
Conclusion and Future Work
We have shown how to develop robust NERC systems across languages and datasets with minimal human intervention, even for languages with inflected named entities. This is based on adequately combining word representation features on top of shallow and general local features. Crucially, we have empirically demonstrated how to effectively combine various types of simple word representation features depending on the source data available. This has resulted in a clear methodology for using the three types of clustering features which produces very competitive results in both in-domain and out-of-domain settings.
Thus, despite the relative simplicity of our approach, we report state of the art results for Dutch, English, German, Spanish and Basque in seven in-domain evaluations.
We also outperform previous work in eight out-of-domain evaluations, showing that our clustering features improve the robustness of NERC systems across datasets. Finally, we have measured how much our system's performance degrades when the amount of supervised data is drastically cut. The results show our models are still very competitive even when reducing the supervised data by half or more. This, together with the lack of linguistic features, facilitates the easy and fast development of NERC systems for new domains or languages.
In future work we would like to further explore the various types of domain adaptation required for robust performance across text genres and domains, perhaps including micro-blogs and noisy text such as tweets. Furthermore, we are also planning to adapt our techniques to other sequence labeling problems such as Opinion Target Extraction BIBREF13, BIBREF14 and Super Sense tagging BIBREF60.
Acknowledgments
We would like to thank the anonymous reviewers for their comments to improve this paper. We would also like to thank Sebastian Padó for his help training the Clark clusters. This work has been supported by the European projects NewsReader, EC/FP7/316404 and QTLeap - EC/FP7/610516, and by the Spanish Ministry for Science and Innovation (MICINN) SKATER, Grant No. TIN2012-38584-C06-01 and TUNER, TIN2015-65308-C5-1-R.
Finally, the clustering features are obtained by processing the following clusters from publicly available corpora: (i) 1000 Brown clusters; (ii) Clark and Word2vec clusters in the 100-600 range. To choose the best combination of clustering features we test the available permutations of Clark and Word2vec clusters with and without the Brown clusters on the development data. Table TABREF58 provides details of every corpus used to induce the clusters. For example, the first row reads: “Reuters RCV1 was used; the original 63 million words were reduced to 35 million after pre-processing for inducing Brown clusters. Clark and Word2vec clusters were trained on the whole corpus”. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF38 .
Every evaluation is carried out using the CoNLL NER evaluation script. The results are obtained with the BILOU encoding for every experimental setting except for German CoNLL 2003.
In-domain evaluation
In this section the results are presented by language. In two cases, Dutch and German, we use two different datasets, making it a total of seven in-domain evaluations.
We tested our system in the highly competitive CoNLL 2003 dataset. Table TABREF63 shows that three of our models outperform previous best results reported for English in the CoNLL 2003 dataset BIBREF49 . Note that the best F1 score (91.36) is obtained by adding trigrams and character n-gram features to the best model (91.18). The results also show that these models improve the baseline provided by the local features by around 7 points in F1 score. The most significant gain is in terms of recall, almost 9 points better than the baseline.
We also report very competitive results, only marginally lower than BIBREF49 , based on the stacking and combination of clustering features as described in Section UID57 . Thus, both best cluster and comp models, based on local plus clustering features only, outperform very competitive and more complex systems such as those of BIBREF31 and BIBREF52 , and obtain only marginally lower results than BIBREF49 . The stacking and combining effect manifests itself very clearly when we compare the single clustering feature models (BR, CW600, W2VG200 and W2VW400) with the light, comp and best cluster models which improve the overall F1 score by 1.30, 1.72 and 1.85 respectively over the best single clustering model (CW600).
It is worth mentioning that our models do not score best in the development data. As the development data is closer in style and genre to the training data BIBREF31 , this may suggest that our system generalizes better on test data that is not close to the training data; indeed, the results reported in Section SECREF79 seem to confirm this hypothesis.
We also compared our results with respect to the best two publicly available English NER systems trained on the same data. We downloaded the Stanford NER system distributed in the 2015-01-30 package. We evaluated their CoNLL model and, while the result is substantially better than their reference paper BIBREF32 , our clustering models obtain better results. The Illinois NER tagger is used by BIBREF31 and BIBREF52 , both of which are outperformed by our system.
We tested our system in the GermEval 2014 dataset. Table TABREF65 compares our results with the best two systems (ExB and UKP) by means of the M3 metric, which separately analyzes the performance in terms of the outer and inner named entity spans. Table TABREF65 makes explicit the significant improvements achieved by the clustering features on top of the baseline system, particularly in terms of recall (almost 11 points in the outer level). The official results of our best configuration (de-cluster-dict) are reported in Table TABREF66 showing that our system marginally improves the best systems' results on that task (ExB and UKP).
We also compare our system, in the last three rows, with the publicly available GermaNER BIBREF26 , which reports results for the 4 main outer level entity types (person, location, organization and other). For this experiment we trained the de-cluster and de-cluster + dict models on the four main classes, improving GermaNER's results by almost 3 F1 points. The GermaNER method of evaluation is interesting because allows researchers to directly compare their systems with a publicly available system trained on GermEval data.
Table TABREF67 compares our German CoNLL 2003 results with the best previous work trained on public data. Our best CoNLL 2003 model obtains results similar to the state of the art performance with respect to the best system published up to date BIBREF24 using public data.
BIBREF24 also report 78.20 F1 with a model trained with Clark clusters induced using the Huge German Corpus (HGC). Unfortunately, the corpus or the induced clusters were not available.
The best system up to date on the CoNLL 2002 dataset, originally published by BIBREF47 , is distributed as part of the Freeling library BIBREF34 . Table TABREF69 lists four models that improve over their reported results, almost by 3 points in F1 measure in the case of the es-cluster model (with our without trigram and character n-gram features).
Despite using clusters from one data source only (see Table TABREF58 ), results in Table TABREF71 show that our nl-cluster model outperforms the best result published on CoNLL 2002 BIBREF45 by 3.83 points in F1 score. Adding the English Illinois NER gazetteers BIBREF31 and trigram and character n-gram features increases the score to 85.04 F1, 5.41 points better than previously published work on this dataset.
We also compared our system with the more recently developed SONAR-1 corpus and the companion NERD system distributed inside its release BIBREF33 . They report 84.91 F1 for the six main named entity types via 10-fold cross validation. For this comparison we chose the local, nl-cluster and nl-cluster-dict configurations from Table TABREF71 and ran them on SONAR-1 using the same settings. The results reported in Table TABREF72 show our system's improvement over previous results on this dataset.
Table TABREF74 reports on the experiments using the Egunkaria NER dataset provided by BIBREF23 . Due to the sparsity of the MISC class mentioned in Section SECREF9 , we decided to train our models on three classes only (location, organization and person). Thus, the results are obtained by training our models in the customary manner and evaluating on 3 classes. However, for direct comparison with previous work BIBREF23 , we also evaluate our best eu-cluster model (trained on 3 classes) on 4 classes.
The results show that our eu-cluster model clearly improves upon previous work by 4 points in F1 measure (75.40 vs 71.35). These results are particularly interesting as it had been so far assumed that complex linguistic features and language-specific rules were required to perform well for agglutinative languages such as Basque BIBREF23 . Finally, it is worth noting that the eu-cluster model increases the overall F1 score by 11.72 over the baseline, of which 10 points are gained in precision and 13 in terms of recall.
Reducing training data
So far, we have seen how, given a fixed amount of supervised training data, leveraging unlabeled data using multiple cluster sources helped to obtain state of the art results in seven different in-domain settings for five languages. In this section we will investigate to what extent our system allows us to reduce the dependency on supervised training data.
We first use the English CoNLL 2003 dataset for this experiment. The training set consists of around 204K words and we use various smaller versions of it to test the performance of our best cluster model reported in Table TABREF63 . Table TABREF76 displays the F1 results of the baseline system consisting of local features and the best cluster model. The Δ column refers to the gains of our best cluster model with respect to the baseline model for every portion of the training set.
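As an illustration, such smaller training versions can be produced by reading the CoNLL file sentence by sentence and keeping sentences until a token budget is reached. The sketch below assumes a blank-line-separated CoNLL file, an illustrative file name, and that portions are taken from the beginning of the training set (the exact sampling procedure is not specified in the text).

def read_sentences(path):
    # Read a CoNLL-formatted file as a list of sentences (lists of lines).
    sentences, current = [], []
    for line in open(path, encoding="utf-8"):
        line = line.rstrip("\n")
        if not line:
            if current:
                sentences.append(current)
                current = []
        else:
            current.append(line)
    if current:
        sentences.append(current)
    return sentences

def write_portion(sentences, fraction, out_path):
    # Keep sentences from the start of the corpus until the token budget is met.
    budget = int(sum(len(s) for s in sentences) * fraction)
    taken = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for sentence in sentences:
            if taken >= budget:
                break
            out.write("\n".join(sentence) + "\n\n")
            taken += len(sentence)

sentences = read_sentences("eng.train")          # illustrative path
for fraction in (0.5, 0.25, 0.125):
    write_portion(sentences, fraction, "eng.train.%g" % fraction)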
While we have already commented on the substantial gains obtained simply by adding our clustering features, it is also interesting to note that the gains are much more substantial when less supervised training data is available. Furthermore, it is striking that training with our clustering features on only one eighth of the training data (30K words) allows us to obtain performance similar to the baseline system trained on the full training set. Equally interesting is the fact that cutting the training data by half only marginally harms the overall performance. Finally, training on just a quarter of the training set (60K) results in a very competitive model when compared with other publicly available NER systems for English trained on the full training set: it roughly matches Stanford NER's performance, it outperforms models using external knowledge or non-local features reported by BIBREF31 , and also several models reported by BIBREF52 , which use one type of word representations on top of the baseline system.
We have also re-trained the Illinois NER system BIBREF31 and our best CoNLL 2003 model (en-91-18) for comparison. First, we can observe that for every portion of the training set, both our best cluster and en-91-18 models outperform the Illinois NER system. The best cluster results are noteworthy because, as opposed to Illinois NER, that model does not use gazetteers or global features for extra performance.
These results are mirrored by those obtained for the rest of the languages and datasets. Thus, Table TABREF77 displays, for each language, the F1 results of the baseline system and of the best cluster models on top of the baseline. Overall, it confirms that our cluster-based models obtain state of the art results using just one half of the data. Furthermore, using just one quarter of the training data we are able to match the results of other publicly available systems for every language, outperforming in some cases, such as Basque, much more complex systems of classifiers exploiting linguistic specific rules and features (POS tags, lemmas, semantic information from WordNet, etc.). Considering that Basque is a low-resourced language, it is particularly relevant to be able to reduce as much as possible the amount of gold supervised data required to develop a competitive NERC system.
Out-of-domain evaluations
NERC systems are often used in out-of-domain settings, namely, to annotate data that greatly differs from the data from which the NERC models were learned. These differences can be of text genre and/or domain, but also because the assumptions of what constitutes a named entity might differ. It is therefore interesting to develop robust NERC systems across both domains and datasets. In this section we demonstrate that our approach, consisting of basic, general local features and the combination and stacking of clusters, produces robust NERC systems in three out-of-domain evaluation settings:
Class disagreements: Named entities are assigned to different classes in training and test.
Different text genre: The text genre of training and test data differs.
Annotation guidelines: The gold annotation of the test data follows different guidelines from the training data. This is usually reflected in different named entity spans.
The datasets and languages chosen for these experiments are based on the availability of both previous results and publicly distributed NERC systems to facilitate direct comparison of our system with other approaches. Table TABREF83 specifies the datasets used for each out-of-domain setting and language. Details of each dataset can be found in Table TABREF10 .
MUC 7 annotates seven entity types, including four that are not included in CoNLL data: DATE, MONEY, NUMBER and TIME entities. Furthermore, CoNLL includes the MISC class, which was absent in MUC 7. This means that there are class disagreements in the gold standard annotation between the training and test datasets. In addition to the four CoNLL classes, SONAR-1 includes PRODUCT and EVENT whereas Ancora also annotates DATE and NUMBER. For example, consider the following sentence of the MUC 7 gold standard (example taken from BIBREF31 ):
“...balloon, called the Virgin Global Challenger.”
The gold annotation in MUC 7 establishes that there is one named entity:
“...balloon, called [ORG Virgin] Global Challenger.”
However, according to CoNLL 2003 guidelines, the entire name should be annotated like MISC:
“...balloon, called [MISC Virgin Global Challenger].”
In this setting some adjustments are made to the NERC systems' output. Following previous work BIBREF31 , every named entity that is not LOC, ORG, PER or MISC is labeled as `O'. Additionally for MUC 7 every MISC named entity is changed to `O'. For English we used the models reported in Section UID62 . For Spanish and Dutch we trained our system with the Ancora and SONAR-1 corpora using the configurations described in Sections UID68 and UID70 respectively. Table TABREF85 compares our results with previous approaches: using MUC 7, BIBREF52 provide standard phrase results whereas BIBREF31 score token based F1 results, namely, each token is considered a chunk, instead of considering multi-token spans too. For Spanish we use the Stanford NER Spanish model (2015-01-30 version) trained with Ancora. For Dutch we compare our SONAR-1 system with the companion system distributed with the SONAR-1 corpus BIBREF33 . The results are summarized in Table TABREF85 .
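These adjustments amount to a simple relabeling of the system output; a minimal sketch follows, where the BIO/BILOU-style tag format and the muc7 flag are assumptions made for illustration.

CONLL_TYPES = {"LOC", "ORG", "PER", "MISC"}

def adjust_tag(tag, muc7=False):
    # Map any class outside the CoNLL set to 'O'; for MUC 7, also drop MISC.
    if tag == "O":
        return tag
    prefix, etype = tag.split("-", 1)
    if etype not in CONLL_TYPES or (muc7 and etype == "MISC"):
        return "O"
    return tag

# Example: DATE and MISC predictions are discarded when scoring on MUC 7.
print(adjust_tag("B-DATE", muc7=True))   # O
print(adjust_tag("U-MISC", muc7=True))   # O
print(adjust_tag("B-ORG", muc7=True))    # B-ORG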
In this setting the out-of-domain character is given by the differences in text genre between the English CoNLL 2003 set and the Wikigold corpus. We compare our system with English models trained on large amounts of silver-standard text (3.5M tokens) automatically created from the Wikipedia BIBREF27 . They report results on Wikigold showing that they outperformed their own CoNLL 2003 gold-standard model by 10 points in F1 score. We compare their result with our best cluster model in Table TABREF87 . While the results of our baseline model confirm theirs, our clustering model's score is slightly higher. This result is interesting because it is arguably simpler to induce the clusters we use to train ixa-pipe-nerc than to create the silver-standard training set from Wikipedia as described in BIBREF27 .
In this section the objective is to study not so much the differences in textual genre as the influence of substantially different annotation standards. We only use three classes (location, organization and person) to evaluate the best models presented for the in-domain evaluations, labeling as `O' every entity which is not LOC, ORG or PER.
The text genre of MEANTIME is not that different from CoNLL data. However, differences in the gold standard annotation result in significant disagreements regarding the span of the named entities BIBREF59 . For example, the following issues are markedly different with respect to the training data we use for each language:
Different criteria to decide when a named entity is annotated: in the expression “40 billion US air tanker contract” the MEANTIME gold standard does not mark `US' as location, whereas in the training data this is systematically annotated.
Mentions including the definite article within the name entity span: `the United States' versus `United States'.
Longer extents containing common nouns: in the MEANTIME corpus there are many entities such as “United States airframer Boeing”, which in this case is considered an organization, whereas in the training data this span will in general consist of two entities: `United States' as location and `Boeing' as organization.
Common nouns modifying the proper name: `Spokeswoman Sandy Angers' is annotated as a named entity of type PER whereas in the training data used the span of the named entity would usually be `Sandy Angers'.
The CoNLL NER phrase-based evaluation punishes any bracketing error as both a false positive and a false negative. Thus, these span-related disagreements make this setting extremely hard for models trained according to other annotation guidelines, as shown by Table TABREF93 . Our baseline models degrade around 40 F1 points and the cluster-based models around 35. Other systems' results worsen much more, especially for Spanish and Dutch. The token-based scores are in general better but the relative performance between systems across languages is similar.
As an additional experiment, we also tested the English model recommended by Stanford NER, which is trained for three classes (LOC, PER, ORG) using a variety of public and (not identified) private corpora (referred to as Stanford NER 3 class (ALL) in Table TABREF94 ). The results with respect to their CoNLL model improved by around 3 points in F1 score across named entity labels and evaluation types (phrase or token based). In view of these results, we experimented with multi-corpora training data added to our best CoNLL 2003 model (en-91-18). Thus, we trained using three public training sets: MUC 7, CoNLL 2003 and Ontonotes 4.0. The local model with the three training sets (Local ALL) improved by 12 and 17 points in F1 score across evaluations and entity types, outperforming our best model trained only with CoNLL 2003. Adding the clustering features gained between 2 and 5 more points, surpassing the Stanford NER 3 class multi-corpora model in every evaluation. We believe that the main reason to explain these improvements is the variety and quantity of annotations provided by Ontonotes (1M word corpus), and to a lesser extent by MUC 7, which includes some spans containing common nouns and determiners making the model slightly more robust regarding the mention spans.
Discussion
Despite the simplicity of the ixa-pipe-nerc approach, we report best results for English in 4 different datasets: for CoNLL 2003 and for the three English out-of-domain evaluations. For German we improve the results of the best system in the GermEval 2014 task and obtain comparable results to previous work in the CoNLL 2003 dataset using publicly available data. In Spanish we provide results on CoNLL 2002 and in two out-of-domain evaluations clearly outperforming previous best results. For Dutch we improve over previous results in CoNLL 2002 and SONAR-1 data and two out-of-domain evaluations. Finally, for Basque (Egunkaria) the improvements are considerable.
Conclusion and Future Work
We have shown how to develop robust NERC systems across languages and datasets with minimal human intervention, even for languages with inflected named entities. This is based on adequately combining word representation features on top of shallow and general local features. Crucially, we have empirically demonstrated how to effectively combine various types of simple word representation features depending on the source data available. This has resulted in a clear methodology for using the three types of clustering features which produces very competitive results in both in-domain and out-of-domain settings.
Thus, despite the relative simplicity of our approach, we report state of the art results for Dutch, English, German, Spanish and Basque in seven in-domain evaluations.
We also outperform previous work in eight out-of-domain evaluations, showing that our clustering features improve the robustness of NERC systems across datasets. Finally, we have measured how much our system's performance degrades when the amount of supervised data is drastically cut. The results show our models are still very competitive even when reducing the supervised data by half or more. This, together with the lack of linguistic features, facilitates the easy and fast development of NERC systems for new domains or languages.
In future work we would like to explore more the various types of domain adaptation required for robust performance across text genres and domains, perhaps including micro-blog and noisy text such as tweets. Furthermore, we are also planning to adapt our techniques to other sequence labeling problems such as Opinion Target Extraction BIBREF13 , BIBREF14 and Super Sense tagging BIBREF60 .
Acknowledgments
We would like to thank the anonymous reviewers for their comments to improve this paper. We would also like to thank Sebastian Padó for his help training the Clark clusters. This work has been supported by the European projects NewsReader, EC/FP7/316404 and QTLeap - EC/FP7/610516, and by the Spanish Ministry for Science and Innovation (MICINN) SKATER, Grant No. TIN2012-38584-C06-01 and TUNER, TIN2015-65308-C5-1-R.
Q: what are the baselines?
Text: Introduction
A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.) and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be used to refer to 18 different entities, including the continent, the European Union, various Greek mythological entities, a rock band, some music albums, a magazine, a short story, etc. Furthermore, it is possible to refer to a named entity by means of anaphoric pronouns and co-referent expressions such as `he', `her', `their', `I', `the 35 year old', etc. Therefore, in order to provide an adequate and comprehensive account of named entities in text it is necessary to recognize the mention of a named entity and to classify it by a pre-defined type (e.g, person, location, organization). Named Entity Recognition and Classification (NERC) is usually a required step to perform Named Entity Disambiguation (NED), namely to link `Europe' to the right Wikipedia article, and to resolve every form of mentioning or co-referring to the same entity.
Nowadays NERC systems are widely being used in research for tasks such as Coreference Resolution BIBREF0 , Named Entity Disambiguation BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 for which a lot of interest has been created by the TAC KBP shared tasks BIBREF6 , Machine Translation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , Aspect Based Sentiment Analysis BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , Event Extraction BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 and Event Ordering BIBREF20 .
Moreover, NERC systems are integrated in the processing chain of many industrial software applications, mostly by companies offering specific solutions for a particular industrial sector which require recognizing named entities specific of their domain. There is therefore a clear interest in both academic research and industry to develop robust and efficient NERC systems: For industrial vendors it is particularly important to diversify their services by including NLP technology for a variety of languages whereas in academic research NERC is one of the foundations of many other NLP end-tasks.
Most NERC taggers are supervised statistical systems that extract patterns and term features which are considered to be indications of Named Entity (NE) types using the manually annotated training data (extracting orthographic, linguistic and other types of evidence) and often external knowledge resources. As in other NLP tasks, supervised statistical NERC systems are more robust and obtain better performance on available evaluation sets, although sometimes the statistical models can also be combined with specific rules for some NE types. For best performance, supervised statistical approaches require manually annotated training data, which is both expensive and time-consuming. This has seriously hindered the development of robust high performing NERC systems for many languages but also for other domains and text genres BIBREF21 , BIBREF22 , in what we will henceforth call `out-of-domain' evaluations.
Moreover, supervised NERC systems often require fine-tuning for each language and, as some of the features require language-specific knowledge, this poses yet an extra complication for the development of robust multilingual NERC systems. For example, it is well-known that in German every noun is capitalized and that compounds including named entities are pervasive. This also applies to agglutinative languages such as Basque, Korean, Finnish, Japanese, Hungarian or Turkish. For this type of languages, it had usually been assumed that linguistic features (typically Part of Speech (POS) and lemmas, but also semantic features based on WordNet, for example) and perhaps specific hand-crafted rules, were a necessary condition for good NERC performance as they would allow to capture better the most recurrent declensions (cases) of named entities for Basque BIBREF23 or to address problems such as sparsity and capitalization of every noun for German BIBREF24 , BIBREF25 , BIBREF26 . This language dependency was easy to see in the CoNLL 2002 and 2003 tasks, in which systems participating in the two available languages for each edition obtained in general different results for each language. This suggests that without fine-tuning for each corpus and language, the systems did not generalize well across languages BIBREF27 .
This paper presents a multilingual and robust NERC system based on simple, general and shallow features that heavily relies on word representation features for high performance. Even though we do not use linguistic motivated features, our approach also works well for inflected languages such as Basque and German. We demonstrate the robustness of our approach by reporting best results for five languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, including seven in-domain and eight out-of-domain evaluations.
Contributions
The main contributions of this paper are the following: First, we show how to easily develop robust NERC systems across datasets and languages with minimal human intervention, even for languages with declension and/or complex morphology. Second, we empirically show how to effectively use various types of simple word representation features, thereby providing a clear methodology for choosing and combining them. Third, we demonstrate that our system still obtains very competitive results even when the supervised data is reduced by half (even less in some cases), alleviating the dependency on costly hand-annotated data. These three main contributions are based on:
A simple and shallow robust set of features across languages and datasets, even in out-of-domain evaluations.
The lack of linguistic motivated features, even for languages with agglutinative (e.g., Basque) and/or complex morphology (e.g., German).
A clear methodology for using and combining various types of word representation features by leveraging public unlabeled data.
Our approach consists of shallow local features complemented by three types of word representation (clustering) features: Brown clusters BIBREF28 , Clark clusters BIBREF29 and K-means clusters on top of the word vectors obtained by using the Skip-gram algorithm BIBREF30 . We demonstrate that combining and stacking different clustering features induced from various data sources (Reuters, Wikipedia, Gigaword, etc.) allows to cover different and more varied types of named entities without manual feature tuning. Even though our approach is much simpler than most, we obtain the best results for Dutch, Spanish and English and comparable results in German (on CoNLL 2002 and 2003). We also report best results for German using the GermEval 2014 shared task data and for Basque using the Egunkaria testset BIBREF23 .
We report out-of-domain evaluations in three languages (Dutch, English and Spanish) using four different datasets to compare our system with the best publicly available systems for those languages: Illinois NER BIBREF31 for English, Stanford NER BIBREF32 for English and Spanish, SONAR-1 NERD for Dutch BIBREF33 and Freeling for Spanish BIBREF34 . We outperform every other system in the eight out-of-domain evaluations reported in Section SECREF79 . Furthermore, the out-of-domain results show that our clustering features provide a simple and easy method to improve the robustness of NERC systems.
Finally, and inspired by previous work BIBREF35 , BIBREF36 we measure how much supervision is required to obtain state of the art results. In Section SECREF75 we show that we can still obtain very competitive results reducing the supervised data by half (and sometimes even more). This, together with the lack of linguistic features, means that our system considerably saves data annotation costs, which is quite convenient when trying to develop a NERC system for a new language and/or domain.
Our system learns Perceptron models BIBREF37 using the Machine Learning machinery provided by the Apache OpenNLP project with our own customized (local and clustering) features. Our NERC system is publicly available and distributed under the Apache 2.0 License and part of the IXA pipes tools BIBREF38 . Every result reported in this paper is obtained using the conlleval script from the CoNLL 2002 and CoNLL 2003 shared tasks. To guarantee reproducibility of results we also make publicly available the models and the scripts used to perform the evaluations. The system, models and evaluation scripts can be found in the ixa-pipe-nerc website.
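For readers unfamiliar with the model family, a minimal per-token perceptron over string-valued features looks roughly as follows; this is an illustrative sketch only, not the actual Apache OpenNLP implementation or its API.

from collections import defaultdict

class PerceptronClassifier:
    def __init__(self, labels):
        self.labels = list(labels)
        self.weights = defaultdict(float)        # (feature, label) -> weight

    def score(self, features, label):
        return sum(self.weights[(f, label)] for f in features)

    def predict(self, features):
        return max(self.labels, key=lambda label: self.score(features, label))

    def update(self, features, gold):
        # Standard perceptron update: promote the gold label, demote the error.
        predicted = self.predict(features)
        if predicted != gold:
            for f in features:
                self.weights[(f, gold)] += 1.0
                self.weights[(f, predicted)] -= 1.0

def train(classifier, data, epochs=5):
    # data: iterable of (feature list, gold outcome) pairs, one per token.
    for _ in range(epochs):
        for features, gold in data:
            classifier.update(features, gold)

Since some of the features described below depend on previous outcomes, decoding proceeds token by token from left to right so that earlier predictions are available at feature-extraction time.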
Next Section reviews related work, focusing on best performing NERC systems for each language evaluated on standard shared evaluation task data. Section SECREF3 presents the design of our system and our overall approach to NERC. In Section SECREF4 we report the evaluation results obtained by our system for 5 languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, distributed in 7 in-domain and 8 out-of-domain evaluations. Section SECREF5 discusses the results and contributions of our approach. In Section SECREF6 we highlight the main aspects of our work providing some concluding remarks and future work to be done using our NERC approach applied to other text genres, domains and sequence labeling tasks.
Related Work
The Named Entity Recognition and Classification (NERC) task was first defined for the Sixth Message Understanding Conference (MUC 6) BIBREF39 . The MUC 6 tasks focused on Information Extraction (IE) from unstructured text and NERC was deemed to be an important IE sub-task with the aim of recognizing and classifying nominal mentions of persons, organizations and locations, and also numeric expressions of dates, money, percentage and time. In the following years, research on NERC increased as it was considered to be a crucial source of information for other Natural Language Processing tasks such as Question Answering (QA) and Textual Entailment (RTE) BIBREF39 . Furthermore, while MUC 6 was solely devoted to English as target language, the CoNLL shared tasks (2002 and 2003) boosted research on language independent NERC for 3 additional target languages: Dutch, German and Spanish BIBREF40 , BIBREF41 .
The various MUC, ACE and CoNLL evaluations provided a very convenient framework to test and compare NERC systems, algorithms and approaches. They provided manually annotated data for training and testing the systems as well as an objective evaluation methodology. Using such a framework, research rapidly evolved from rule-based approaches (consisting of manually handcrafted rules) to language independent systems focused on learning supervised statistical models. Thus, while in the MUC 6 competition 5 out of 8 systems were rule-based, in CoNLL 2003 16 teams participated in the English task, all using statistical NERC BIBREF39 .
Datasets
Table TABREF10 describes the 12 datasets used in this paper. The first half lists the corpora used for in-domain evaluation whereas the lower half contains the out-of-domain datasets. The CoNLL NER shared tasks focused on language independent machine learning approaches for 4 entity types: person, location, organization and miscellaneous entities. The 2002 edition provided manually annotated data in Dutch and Spanish whereas in 2003 the languages were German and English. In addition to the CoNLL data, for English we also use the formal run of MUC 7 and Wikigold for out-of-domain evaluation. Very detailed descriptions of CoNLL and MUC data can easily be found in the literature, including the shared task descriptions themselves BIBREF42 , BIBREF40 , BIBREF41 , so in the following we will describe the remaining, newer datasets.
The Wikigold corpus consists of 39K words of English Wikipedia manually annotated following the CoNLL 2003 guidelines BIBREF27 . For Spanish and Dutch, we also use Ancora 2.0 BIBREF43 and SONAR-1 BIBREF33 respectively. SONAR-1 is a one million word Dutch corpus with both coarse-grained and fine-grained named entity annotations. The coarse-grained level includes product and event entity types in addition to the four types defined in CoNLL data. Ancora adds date and number types to the CoNLL four main types. In Basque the only gold standard corpus is Egunkaria BIBREF23 . Although the Basque Egunkaria dataset is annotated with four entity types, the miscellaneous class is extremely sparse, occurring only in a proportion of 1 to 10. Thus, in the training data there are 156 entities annotated as MISC whereas each of the other three classes contain around 1200 entities.
In the datasets described so far, named entities were assumed to be non-recursive and non-overlapping. During the annotation process, if a named entity was embedded in a longer one, then only the longest mention was annotated. The exceptions are the GermEval 2014 shared task data for German and MEANTIME, where nested entities are also annotated (both inner and outer spans).
The GermEval 2014 NER shared task BIBREF25 aimed at improving the state of the art of German NERC which was perceived to be comparatively lower than the English NERC. Two main extensions were introduced in GermEval 2014: (i) fine-grained named entity sub-types to indicate derivations and compounds; (ii) embedded entities (and not only the longest span) are annotated. In total, there are 12 types for classification: person, location, organization, other plus their sub-types annotated at their inner and outer levels.
Finally, the MEANTIME corpus BIBREF44 is a multilingual (Dutch, English, Italian and Spanish) publicly available evaluation set annotated within the Newsreader project. It consists of 120 documents, divided into 4 topics: Apple Inc., Airbus and Boeing, General Motors, Chrysler and Ford, and the stock market. The articles are selected in such a way that the corpus contains different articles that deal with the same topic over time (e.g. launch of a new product, discussion of the same financial indexes). Moreover, it contains nested entities so the evaluation results will be provided in terms of the outer and the inner spans of the named entities. MEANTIME includes six named entity types: person, location, organization, product, financial and mixed.
Related Approaches
Named entity recognition is a task with a long history in NLP. Therefore, we will summarize those approaches that are most relevant to our work, especially those we will directly compared with in Section SECREF4 . Since CoNLL shared tasks, the most competitive approaches have been supervised systems learning CRF, SVM, Maximum Entropy or Averaged Perceptron models. In any case, while the machine learning method is important, it has also been demonstrated that good performance might largely be due to the feature set used BIBREF45 . Table TABREF13 provides an overview of the features used by previous best scoring approaches for each of the five languages we address in this paper.
Traditionally, local features have included contextual and orthographic information, affixes, character-based features, prediction history, etc. As argued by the CoNLL 2003 organizers, no feature set was deemed to be ideal for NERC BIBREF41 , although many approaches for English refer to BIBREF46 as a useful general approach.
Some of the CoNLL participants use linguistic information (POS, lemmas, chunks, but also specific rules or patterns) for Dutch and English BIBREF47 , BIBREF45 , although this type of features was deemed to be most important for German, for which the use of linguistic features is pervasive BIBREF25 . This is due to the sparsity caused by the declension cases, the tendency to form compounds containing named entities and the capitalization of every noun BIBREF24 . For example, the best system among the 11 participants in GermEval 2014, ExB, uses morphological features and specific suffix lists aimed at capturing frequent patterns in the endings of named entities BIBREF48 .
In agglutinative languages such as Basque, which contains declension cases for named entities, linguistic features are considered to be a requirement. For example, the country name `Espainia' (Spain in Basque) can occur in several forms, Espainian, Espainiera, Espainiak, Espainiarentzat, Espainiako, and many more. Linguistic information has been used to treat this phenomenon. The only previous work for Basque developed Eihera, a rule-based NERC system formalized as finite state transducers to take into account declension classes BIBREF23 . The features of Eihera include word, lemma, POS, declension case, capitalized lemma, etc. These features are complemented with gazetteers extracted from the Euskaldunon Egunkaria newspaper and semantic information from the Basque WordNet.
Dictionaries are widely used to inject world knowledge via gazetteer matches as features in machine learning approaches to NERC. The best performing systems carefully compile their own gazetteers from a variety of sources BIBREF47 . BIBREF31 leverage a collection of 30 gazetteers and matches against each one are weighted as a separate feature. In this way they trust each gazetteer to a different degree. BIBREF49 carefully compiled a large collection of English gazetteers extracted from US Census data and Wikipedia and applied them to the process of inducing word embeddings with very good results.
While it is possible to automatically extract them from various corpora or resources, they still require careful manual inspection of the target data. Thus, our approach only uses off the shelf gazetteers whenever they are publicly available. Furthermore, our method collapses every gazetteer into one dictionary. This means that we only add a feature per token, instead of a feature per token and gazetteer.
The intuition behind non-local (or global) features is to treat similarly all occurrences of the same named entity in a text. BIBREF47 proposed a method to produce the set of named entities for the whole sentence, where the optimal set of named entities for the sentence is the coherent set of named entities which maximizes the summation of confidences of the named entities in the set. BIBREF31 developed three types of non-local features, analyzing global dependencies in a window of between 200 and 1000 tokens.
Semi-supervised approaches leveraging unlabeled text had already been applied to improve results in various NLP tasks. More specifically, it had been previously shown how to apply Brown clusters BIBREF28 for Chinese Word Segmentation BIBREF50 , dependency parsing BIBREF35 , NERC BIBREF51 and POS tagging BIBREF36 .
BIBREF31 used Brown clusters as features obtaining what was at the time the best published result of an English NERC system on the CoNLL 2003 testset. BIBREF52 made a rather exhaustive comparison of Brown clusters, Collobert and Weston's embeddings BIBREF53 and HLBL embeddings BIBREF54 to improve chunking and NERC. They show that in some cases the combination of word representation features was positive but, although they used Ratinov and Roth's (2009) system as starting point, they did not manage to improve over the state of the art. Furthermore, they reported that Brown clustering features performed better than the word embeddings.
BIBREF49 extend the Skip-gram algorithm to learn 50-dimensional lexicon infused phrase embeddings from 22 different gazetteers and the Wikipedia. The resulting embeddings are used as features by scaling them by a hyper-parameter which is a real number tuned on the development data. BIBREF49 report best results up to date for English NERC on CoNLL 2003 test data, 90.90 F1.
The best German CoNLL 2003 system (an ensemble) was outperformed by BIBREF24 . They trained the Stanford NER system BIBREF32 , which uses a linear-chain Conditional Random Field (CRF) with a variety of features, including lemma, POS tag, etc. Crucially, they included “distributional similarity” features in the form of Clark clusters BIBREF29 induced from large unlabeled corpora: the Huge German Corpus (HGC) of around 175M tokens of newspaper text and the deWac corpus BIBREF55 consisting of 1.71B tokens of web-crawled data. Using the clusters induced from deWac as a form of semi-supervision improved the results over the best CoNLL 2003 system by 4 points in F1.
The best participant of the English CoNLL 2003 shared task used the results of two externally trained NERC taggers to create an ensemble system BIBREF56 . BIBREF49 develop a stacked linear-chain CRF system: they train two CRFs with roughly the same features; the second CRF can condition on the predictions made by the first CRF. Their “baseline” system uses a similar local featureset as Ratinov and Roth's (2009) but complemented with gazetteers. Their baseline system combined with their phrase embeddings trained with infused lexicons allow them to report the best CoNLL 2003 result so far.
The best system of the GermEval 2014 task built an ensemble of classifiers and pattern extractors to find the most likely tag sequence BIBREF48 . They paid special attention to out of vocabulary words which are addressed by semi-supervised word representation features and an ensemble of POS taggers. Furthermore, remaining unknown candidate mentions are tackled by look-up via the Wikipedia API.
Apart from the feature types, the last two columns of Table TABREF13 refer to whether the systems are publicly available and whether any external resources used for training are made available (e.g., induced word embeddings, gazetteers or corpora). This is desirable to be able to re-train the systems on different datasets. For example, we would have been interested in training the Stanford NER system with the full Ancora corpus for the evaluation presented in Table TABREF85 , but their Spanish cluster lexicon is not available. Alternatively, we would have liked to train our system with the same Ancora partition used to train Stanford NER, but that is not available either.
System Description
The design of ixa-pipe-nerc aims at establishing a simple and shallow feature set, avoiding any linguistically motivated features, with the objective of removing any reliance on costly extra gold annotations (POS tags, lemmas, syntax, semantics) and/or cascading errors if automatic language processors are used. The underlying motivation is to obtain robust models to facilitate the development of NERC systems for other languages and datasets/domains while obtaining state of the art results. Our system consists of three groups of features, described in the following subsections: shallow local features, optional gazetteer features, and clustering features based on Brown, Clark and Word2vec word representations.
Table TABREF24 provides an example of the features generated by our system.
Local Features
The local features constitute our baseline system on top of which the clustering features are added. We implement the following feature set, partially inspired by previous work BIBREF46 :
Token: Current lowercase token (w), namely, ekuadorko in Table TABREF24 .
Token Shape: Current lowercase token (w) plus current token shape (wc), where token shape consists of: (i) The token is either lowercase or a 2 digit word or a 4 digit word; (ii) If the token contains digits, then whether it also contains letters, or slashes, or hyphens, or commas, or periods or is numeric; (iii) The token is all uppercase letters or is an acronym or is a one letter uppercase word or starts with a capital letter. Thus, in Table TABREF24 1994an is a 4 digit word (4d), Ekuadorko has an initial capital shape (ic) and hiriburuan is lowercase (lc).
Previous prediction: the previous outcome (pd) for the current token. The previous predictions in our example are null because these words have not been seen previously, except for the comma.
Sentence: Whether the token is the beginning of the sentence. None of the tokens in our example is at the beginning of the sentence, so this feature is not active in Table TABREF24 .
Prefix: Two prefixes consisting of the first three and four characters of the current token: Eku and Ekua.
Suffix: The four suffixes of length one to four from the last four characters of the current token.
Bigram: Bigrams including the current token and the token shape.
Trigram: Trigrams including the current token and the token shape.
Character n-gram: All lowercase character bigrams, trigrams, fourgrams and fivegrams from the current token (ng).
Token, token shape and previous prediction features are placed in a 5 token window, namely, for these three features we also consider the previous and the next two words, as shown in Table TABREF24 .
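A condensed sketch of this local feature extraction is given below; the shape rules are simplified with respect to the description above and the feature-string format is an assumption for illustration (the actual encoding follows Table TABREF24 ).

import re

def token_shape(token):
    # Simplified version of the shape rules described above.
    if re.match(r"\d{4}", token) and not re.match(r"\d{5}", token):
        return "4d"
    if re.match(r"\d{2}", token) and not re.match(r"\d{3}", token):
        return "2d"
    if any(c.isdigit() for c in token):
        return "digit-mix"          # digits plus letters, slashes, hyphens, etc.
    if token.isupper():
        return "ac" if len(token) > 1 else "sc"
    if token[0].isupper():
        return "ic"
    if token.islower():
        return "lc"
    return "other"

def char_ngrams(token, n_min=2, n_max=5):
    token = token.lower()
    return ["ng=" + token[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(token) - n + 1)]

def local_features(tokens, i, previous_predictions):
    # previous_predictions holds the outcomes already assigned to tokens[0:i].
    w = tokens[i]
    feats = []
    for offset in range(-2, 3):                  # 5 token window
        j = i + offset
        if 0 <= j < len(tokens):
            feats.append("w[%d]=%s" % (offset, tokens[j].lower()))
            feats.append("wc[%d]=%s,%s" % (offset, tokens[j].lower(), token_shape(tokens[j])))
            if j < i:
                feats.append("pd[%d]=%s" % (offset, previous_predictions[j]))
    if i == 0:
        feats.append("bos")                      # beginning of sentence
    feats += ["pre=" + w[:3], "pre=" + w[:4]]
    feats += ["suf=" + w[-n:] for n in range(1, 5)]
    if i > 0:
        feats.append("big=%s,%s" % (tokens[i - 1].lower(), w.lower()))
    feats += char_ngrams(w)
    return feats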
Gazetteers
We add gazetteers to our system only if they are readily available to use, but our approach does not fundamentally depend upon them. We perform a look-up in a gazetteer to check if a named entity occurs in the sentence. The result of the look-up is represented with the same encoding chosen for the training process, namely, the BIO or BILOU scheme. Thus, for the current token we add the following features:
The current named entity class in the encoding schema. Thus, in the BILOU encoding we would have “unit”, “beginning”, “last”, “inside”, or, if no match is found, “outside”, combined with the specific named entity type (LOC, ORG, PER, MISC, etc.).
The current named entity class as above and the current token.
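Putting the two features together, a minimal look-up sketch is shown below; it assumes the collapsed dictionary maps lowercased multi-token entries to an entity type and uses the BILOU scheme.

def gazetteer_tags(tokens, gazetteer, max_len=5):
    # Longest-match look-up of dictionary entries in the sentence, encoded in BILOU.
    lowered = [t.lower() for t in tokens]
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = 1
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            etype = gazetteer.get(" ".join(lowered[i:i + n]))
            if etype is not None:
                if n == 1:
                    tags[i] = "U-" + etype
                else:
                    tags[i] = "B-" + etype
                    for k in range(i + 1, i + n - 1):
                        tags[k] = "I-" + etype
                    tags[i + n - 1] = "L-" + etype
                matched = n
                break
        i += matched
    return tags

def gazetteer_features(tokens, tags, i):
    # (i) the dictionary class for the current token, (ii) the class plus the token.
    return ["dict=" + tags[i], "dictw=%s,%s" % (tags[i], tokens[i].lower())]

For instance, with an entry `new york' mapped to LOC, the tokens `New York' receive B-LOC and L-LOC dictionary tags, and the corresponding dict features are added to each token's feature set.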
Clustering Features
The general idea is that by using some type of semantic similarity or word cluster induced over large unlabeled corpora it is possible to improve the predictions for unseen words in the test set. This type of semi-supervised learning may be aimed at improving performance over a fixed amount of training data or, given a fixed target performance level, to establish how much supervised data is actually required to reach such performance BIBREF35 .
So far the most successful approaches have only used one type of word representation BIBREF49 , BIBREF24 , BIBREF31 . However, our simple baseline combined with one type of word representation features is not able to compete with previous, more complex, systems. Thus, instead of encoding more elaborate features, we have devised a simple method to combine and stack various types of clustering features induced over different data sources or corpora. In principle, our method can be used with any type of word representations. However, for comparison purposes, we decided to use word representations previously used in successful NERC approaches: Brown clusters BIBREF31 , BIBREF52 , Word2vec clusters BIBREF49 and Clark clusters BIBREF32 , BIBREF24 . As can be observed in Table TABREF24 , our clustering features are placed in a 5 token window.
The Brown clustering algorithm BIBREF28 is a hierarchical algorithm which clusters words to maximize the mutual information of bigrams. Thus, it is a class-based bigram model in which:
The probability of a document corresponds to the product of the probabilities of its bigrams,
the probability of each bigram is calculated by multiplying the probability of a bigram model over latent classes by the probability of each class generating the actual word types in the bigram, and
each word type has non-zero probability only on a single class.
The Brown algorithm takes a vocabulary of words to be clustered and a corpus of text containing these words. It starts by assigning each word in the vocabulary to its own separate cluster, then iteratively merges the pair of clusters which leads to the smallest decrease in the likelihood of the text corpus. This produces a hierarchical clustering of the words, which is usually represented as a binary tree, as shown in Figure FIGREF44 . In this tree every word is uniquely identified by its path from the root, and the path can be represented by a bit string. It is also possible to choose different levels of word abstraction by choosing different depths along the path from the root to the word. Therefore, by using paths of various lengths, we obtain clustering features of different granularities BIBREF57 .
We use paths of length 4, 6, 10 and 20 as features BIBREF31 . However, we introduce several novelties in the design of our Brown clustering features:
For each feature which is token-based, we add a feature containing the paths computed for the current token. Thus, taking into account our baseline system, we will add the following Brown clustering features:
Brown Token: existing paths of length 4, 6, 10 and 20 for the current token.
Brown Token Shape: existing paths of length 4, 6, 10, 20 for the current token and current token shape.
Brown Bigram: existing paths of length 4, 6, 10, 20 for bigrams including the current token.
Brown clustering features benefit from two additional features:
Previous prediction plus token: the previous prediction (pd) for the current token and the current token.
Previous two predictions: the previous prediction for the current and the previous token.
For space reasons, Table TABREF24 only shows the Brown Token (bt) and Brown Token Shape (c) features for paths of length 4 and 6. We use the publicly available tool implemented by BIBREF50 with default settings. The input consists of a corpus tokenized and segmented one sentence per line, without punctuation. Furthermore, we follow previous work and remove all sentences which consist of less than 90% lowercase characters BIBREF50 , BIBREF52 before inducing the Brown clusters.
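A sketch of these features is shown below; it assumes the path file has one tab-separated `<bit-path> <word> <count>' entry per line (an assumption about the tool's output format), reuses the token_shape helper sketched above for the local features, and includes the sentence filter just mentioned (how the 90% threshold is computed is also an assumption).

def mostly_lowercase(sentence, threshold=0.90):
    # Corpus filter: keep only sentences with at least 90% lowercase characters
    # (computed here over alphabetic characters).
    letters = [c for c in sentence if c.isalpha()]
    if not letters:
        return False
    return sum(c.islower() for c in letters) / len(letters) >= threshold

def load_brown_paths(path_file):
    paths = {}
    for line in open(path_file, encoding="utf-8"):
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            paths[fields[1]] = fields[0]          # word -> bit path
    return paths

PATH_LENGTHS = (4, 6, 10, 20)

def brown_features(tokens, i, paths):
    feats = []
    for offset in range(-2, 3):                   # same 5 token window
        j = i + offset
        if 0 <= j < len(tokens):
            # Fall back to the lowercased form, since whether the cluster
            # vocabulary is lowercased depends on the pre-processing.
            bits = paths.get(tokens[j], paths.get(tokens[j].lower()))
            if bits is None:
                continue
            for length in PATH_LENGTHS:
                if len(bits) >= length:           # only existing path prefixes
                    feats.append("bt[%d:%d]=%s" % (offset, length, bits[:length]))
                    feats.append("c[%d:%d]=%s,%s"
                                 % (offset, length, bits[:length], token_shape(tokens[j])))
    return feats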
BIBREF29 presents a number of unsupervised algorithms, based on distributional and morphological information, for clustering words into classes from unlabeled text. The focus is on clustering infrequent words into a small number of clusters from comparatively small amounts of data. In particular, BIBREF29 presents an algorithm combining distributional information with morphological information of words “by composing the Ney-Essen clustering model with a model for the morphology within a Bayesian framework”. The objective is to bias the distributional information to put words that are morphologically similar in the same cluster. We use the code released by BIBREF29 off the shelf to induce Clark clusters using the Ney-Essen with morphological information method. The input of the algorithm is a sequence of lowercase tokens without punctuation, one token per line with sentence breaks.
Our Clark clustering features are very simple: we perform a look-up of the current token in the clustering lexicon. If a match is found, we add as a feature the clustering class, or the lack of match if the token is not found (see Clark-a and Clark-b in Table TABREF24 ).
Another family of language models that produces word representations are the neural language models. These approaches produce representation of words as continuous vectors BIBREF53 , BIBREF54 , also called word embeddings. Nowadays, perhaps the most popular among them is the Skip-gram algorithm BIBREF30 . The Skip-gram algorithm uses shallow log-linear models to compute vector representation of words which are more efficient than previous word representations induced on neural language models. Their objective is to produce word embeddings by computing the probability of each n-gram as the product of the conditional probabilities of each context word in the n-gram conditioned on its central word BIBREF30 .
Instead of using continuous vectors as real numbers, we induce clusters or word classes from the word vectors by applying K-means clustering. In this way we can use the cluster classes as simple binary features by injecting unigram match features. We use the Word2vec tool released by BIBREF30 with a 5 window context to train 50-dimensional word embeddings and to obtain the word clusters on top of them. The input of the algorithm is a corpus tokenized, lowercased, with punctuation removed and in one line. The Word2vec features are implemented exactly like the Clark features.
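The following sketch reproduces this step with gensim and scikit-learn as stand-ins for the original word2vec implementation (the library choice and parameter names are assumptions; the paper uses 50-dimensional skip-gram vectors, a 5-token window and 100-600 clusters).

import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def induce_word2vec_clusters(sentences, n_clusters=400):
    # sentences: lowercased, punctuation-free lists of tokens.
    model = Word2Vec(sentences, vector_size=50, window=5, sg=1, min_count=5)
    words = model.wv.index_to_key
    vectors = np.stack([model.wv[word] for word in words])
    classes = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    return {word: int(c) for word, c in zip(words, classes)}

def word2vec_feature(token, lexicon):
    # Same unigram look-up as the Clark features: the class if the token is
    # found in the clustering lexicon, an explicit no-match value otherwise.
    return "w2v=%s" % lexicon.get(token.lower(), "NOMATCH")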
We successfully combine clustering features from different word representations. Furthermore, we also stack or accumulate features of the same type of word representation induced from different data sources, trusting each clustering lexicon to a different degree, as shown by the five encoded clustering features in Table TABREF24 : two Clark and Word2vec features from different source data and one Brown feature. When using word representations as semi-supervised features for a task like NERC, two principal factors need to be taken into account: (i) the source data or corpus used to induce the word representations and (ii) the actual word representation used to encode our features which in turn modify the weight of our model's parameters in the training process.
For the clustering features to be effective the induced clusters need to contain as many words appearing in the training, development and test sets as possible. This can be achieved by using corpora closely related to the text genre or domain of the data sets or by using very large unlabeled corpora which, although not closely domain-related, are large enough to include many relevant words. For example, with respect to the CoNLL 2003 English dataset an example of the former would be the Reuters corpus while the Wikipedia would be an example of the latter.
The word representations obtained by different algorithms would capture different distributional properties of words in a given corpus or data source. Therefore, each type of clustering would allow us to capture different types of occurring named entity types. In other words, combining and stacking different types of clustering features induced over a variety of data sources should help to capture more similarities between different words in the training and test sets, increasing the contribution to the weights of the model parameters in the training process.
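In feature terms, stacking simply means that each clustering lexicon contributes its own independently weighted feature; the lexicon names below are illustrative.

def stacked_cluster_features(token, lexicons):
    # lexicons: feature name -> {word: class} lexicon, with one entry per
    # induced clustering, e.g. {"clark-reuters": ..., "clark-wiki": ...,
    # "w2v-gigaword": ..., "brown-reuters": ...}.
    word = token.lower()
    return ["%s=%s" % (name, lexicon.get(word, "NOMATCH"))
            for name, lexicon in sorted(lexicons.items())]

Because each feature is keyed by the lexicon it comes from, the learner can trust each source to a different degree, which is precisely the effect described above.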
Experimental Results
In this Section we report on the experiments performed with the ixa-pipe-nerc system as described in the previous section. The experiments are performed in 5 languages: Basque, Dutch, English, German and Spanish. For comparison purposes, in-domain results are presented in Section SECREF61 using the most common NERC datasets for each language as summarized in Table TABREF10 . Section SECREF75 analyzes the performance when reducing training data and Section SECREF79 presents eight out-of-domain evaluations for three languages: Dutch, English and Spanish.
The results for Dutch, English and Spanish do not include trigrams and character n-grams in the local featureset described in Section SECREF25 , except for the models in each in-domain evaluation which are marked with “charngram 1:6”.
We also experiment with dictionary features but, in contrast to previous approaches such as BIBREF49 , we only use currently available gazetteers off-the-shelf. For every model marked with “dict” we use the thirty English Illinois NER gazetteers BIBREF31 , irrespective of the target language. Additionally, the English models use six gazetteers about the Global Automotive Industry provided by LexisNexis to the Newsreader project, whereas the German models include, in addition to the Illinois gazetteers, the German dictionaries distributed in the CoNLL 2003 shared task. The gazetteers are collapsed into one large dictionary and deployed as described in Section SECREF35 .
Finally, the clustering features are obtained by processing the following clusters from publicly available corpora: (i) 1000 Brown clusters; (ii) Clark and Word2vec clusters in the 100-600 range. To choose the best combination of clustering features we test the available permutations of Clark and Word2vec clusters with and without the Brown clusters on the development data. Table TABREF58 provides details of every corpus used to induce the clusters. For example, the first row reads: “Reuters RCV1 was used; the original 63 million words were reduced to 35 million after pre-processing for inducing Brown clusters. Clark and Word2vec clusters were trained on the whole corpus”. The pre-processing and tokenization is performed with the IXA pipes tools BIBREF38 .
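This selection step can be sketched as an exhaustive search over feature combinations evaluated on the development set; train_model and dev_f1 below are placeholders for the actual training run and the conlleval-style scoring.

from itertools import combinations

def select_clustering_features(clark_and_w2v, brown, train_model, dev_f1):
    # clark_and_w2v: list of candidate Clark/Word2vec lexicon names;
    # brown: the Brown path lexicon (tried with and without).
    best_setting, best_score = None, -1.0
    for r in range(1, len(clark_and_w2v) + 1):
        for subset in combinations(clark_and_w2v, r):
            for use_brown in (False, True):
                model = train_model(clusters=subset, brown=brown if use_brown else None)
                score = dev_f1(model)
                if score > best_score:
                    best_setting, best_score = (subset, use_brown), score
    return best_setting, best_score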
Every evaluation is carried out using the CoNLL NER evaluation script. The results are obtained with the BILOU encoding for every experimental setting except for German CoNLL 2003.
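For reference, re-encoding from BIO (IOB2) to BILOU only requires looking one tag ahead; a minimal sketch, assuming well-formed BIO input, is:

def bio_to_bilou(tags):
    bilou = []
    for i, tag in enumerate(tags):
        if tag == "O":
            bilou.append(tag)
            continue
        prefix, etype = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        continues = nxt.startswith("I-") and nxt.split("-", 1)[1] == etype
        if prefix == "B":
            bilou.append(("B-" if continues else "U-") + etype)
        else:                                      # prefix == "I"
            bilou.append(("I-" if continues else "L-") + etype)
    return bilou

print(bio_to_bilou(["B-ORG", "I-ORG", "O", "B-LOC"]))
# ['B-ORG', 'L-ORG', 'O', 'U-LOC']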
In-domain evaluation
In this section the results are presented by language. In two cases, Dutch and German, we use two different datasets, making it a total of seven in-domain evaluations.
We tested our system in the highly competitive CoNLL 2003 dataset. Table TABREF63 shows that three of our models outperform previous best results reported for English in the CoNLL 2003 dataset BIBREF49 . Note that the best F1 score (91.36) is obtained by adding trigrams and character n-gram features to the best model (91.18). The results also show that these models improve the baseline provided by the local features by around 7 points in F1 score. The most significant gain is in terms of recall, almost 9 points better than the baseline.
We also report very competitive results, only marginally lower than BIBREF49 , based on the stacking and combination of clustering features as described in Section UID57 . Thus, both best cluster and comp models, based on local plus clustering features only, outperform very competitive and more complex systems such as those of BIBREF31 and BIBREF52 , and obtain only marginally lower results than BIBREF49 . The stacking and combining effect manifests itself very clearly when we compare the single clustering feature models (BR, CW600, W2VG200 and W2VW400) with the light, comp and best cluster models which improve the overall F1 score by 1.30, 1.72 and 1.85 respectively over the best single clustering model (CW600).
It is worth mentioning that our models do not score best in the development data. As the development data is closer in style and genre to the training data BIBREF31 , this may suggest that our system generalizes better on test data that is not close to the training data; indeed, the results reported in Section SECREF79 seem to confirm this hypothesis.
We also compared our results with respect to the best two publicly available English NER systems trained on the same data. We downloaded the Stanford NER system distributed in the 2015-01-30 package. We evaluated their CoNLL model and, while the result is substantially better than their reference paper BIBREF32 , our clustering models obtain better results. The Illinois NER tagger is used by BIBREF31 and BIBREF52 , both of which are outperformed by our system.
We tested our system in the GermEval 2014 dataset. Table TABREF65 compares our results with the best two systems (ExB and UKP) by means of the M3 metric, which separately analyzes the performance in terms of the outer and inner named entity spans. Table TABREF65 makes explicit the significant improvements achieved by the clustering features on top of the baseline system, particularly in terms of recall (almost 11 points in the outer level). The official results of our best configuration (de-cluster-dict) are reported in Table TABREF66 showing that our system marginally improves the best systems' results on that task (ExB and UKP).
We also compare our system, in the last three rows, with the publicly available GermaNER BIBREF26 , which reports results for the 4 main outer level entity types (person, location, organization and other). For this experiment we trained the de-cluster and de-cluster + dict models on the four main classes, improving GermaNER's results by almost 3 F1 points. The GermaNER method of evaluation is interesting because allows researchers to directly compare their systems with a publicly available system trained on GermEval data.
Table TABREF67 compares our German CoNLL 2003 results with the best previous work trained on public data. Our best CoNLL 2003 model obtains results similar to the state of the art performance with respect to the best system published up to date BIBREF24 using public data.
BIBREF24 also report 78.20 F1 with a model trained with Clark clusters induced using the Huge German Corpus (HGC). Unfortunately, the corpus or the induced clusters were not available.
The best system up to date on the CoNLL 2002 dataset, originally published by BIBREF47 , is distributed as part of the Freeling library BIBREF34 . Table TABREF69 lists four models that improve over their reported results, almost by 3 points in F1 measure in the case of the es-cluster model (with our without trigram and character n-gram features).
Despite using clusters from one data source only (see Table TABREF58 ), results in Table TABREF71 show that our nl-cluster model outperforms the best result published on CoNLL 2002 BIBREF45 by 3.83 points in F1 score. Adding the English Illinois NER gazetteers BIBREF31 and trigram and character n-gram features increases the score to 85.04 F1, 5.41 points better than previous published work on this dataset.
We also compared our system with the more recently developed SONAR-1 corpus and the companion NERD system distributed inside its release BIBREF33 . They report 84.91 F1 for the six main named entity types via 10-fold cross validation. For this comparison we chose the local, nl-cluster and nl-cluster-dict configurations from Table TABREF71 and run them on SONAR-1 using the same settings. The results reported in Table TABREF72 shows our system's improvement over previous results on this dataset.
Table TABREF74 reports on the experiments using the Egunkaria NER dataset provided by BIBREF23 . Due to the sparsity of the MISC class mentioned in Section SECREF9 , we decided to train our models on three classes only (location, organization and person). Thus, the results are obtained training our models in the customary manner and evaluating on 3 classes. However, for direct comparison with previous work BIBREF23 , we also evaluate our best eu-cluster model (trained on 3 classes) on 4 classes.
The results show that our eu-cluster model clearly improves upon previous work by 4 points in F1 measure (75.40 vs 71.35). These results are particularly interesting as it had been so far assumed that complex linguistic features and language-specific rules were required to perform well for agglutinative languages such as Basque BIBREF23 . Finally, it is worth noting that the eu-cluster model increases the overall F1 score by 11.72 over the baseline, of which 10 points are gained in precision and 13 in terms of recall.
Reducing training data
So far, we have seen how, given a fixed amount of supervised training data, leveraging unlabeled data via multiple cluster sources helped to obtain state of the art results in seven different in-domain settings for five languages. In this section we investigate to what extent our system allows us to reduce the dependency on supervised training data.
We first use the English CoNLL 2003 dataset for this experiment. The training set consists of around 204K words and we use various smaller versions of it to test the performance of our best cluster model reported in Table TABREF63 . Table TABREF76 displays the F1 results of the baseline system consisting of local features and the best cluster model. The INLINEFORM0 column refers to the gains of our best cluster model with respect to the baseline model for every portion of the training set.
While we have already commented on the substantial gains obtained simply by adding our clustering features, it is also interesting to note that the gains are much more substantial when less supervised training data is available. Furthermore, it is striking that training with our clustering features on only one eighth of the training data (30K words) is enough to obtain performance similar to that of the baseline system trained on the full training set. Equally interesting is the fact that cutting the training data by half only marginally harms the overall performance. Finally, training on just a quarter of the training set (60K) results in a very competitive model when compared with other publicly available NER systems for English trained on the full training set: it roughly matches Stanford NER's performance, it outperforms models using external knowledge or non-local features reported by BIBREF31, and also several models reported by BIBREF52, which use one type of word representation on top of the baseline system.
We have also re-trained the Illinois NER system BIBREF31 and our best CoNLL 2003 model (en-91-18) for comparison. First, we can observe that for every portion of the training set, both our best cluster and en-91-18 models outperform the Illinois NER system. The best cluster results are noteworthy because, unlike Illinois NER, that model does not use gazetteers or global features for extra performance.
These results are mirrored by those obtained for the rest of the languages and datasets. Thus, Table TABREF77 displays, for each language, the F1 results of the baseline system and of the best cluster models on top of the baseline. Overall, it confirms that our cluster-based models obtain state of the art results using just one half of the data. Furthermore, using just one quarter of the training data we are able to match the results of other publicly available systems for every language, in some cases, such as Basque, outperforming much more complex systems of classifiers exploiting language-specific rules and features (POS tags, lemmas, semantic information from WordNet, etc.). Considering that Basque is a low-resourced language, it is particularly relevant to be able to reduce as much as possible the amount of gold supervised data required to develop a competitive NERC system.
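To make the data-reduction setup easier to reproduce, the following sketch shows one way to cut a CoNLL-formatted training file into the word-count portions used above, keeping whole sentences. It is illustrative tooling only, not part of ixa-pipe-nerc itself, and the file name is a placeholder.

    from pathlib import Path

    def read_sentences(path):
        # Yield sentences as lists of token lines from a CoNLL-formatted file
        # (one token per line, blank lines between sentences).
        sentence = []
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            if line.strip():
                sentence.append(line)
            elif sentence:
                yield sentence
                sentence = []
        if sentence:
            yield sentence

    def write_portion(sentences, fraction, total_words, out_path):
        # Write whole sentences until roughly `fraction` of the words are kept.
        budget, written = int(total_words * fraction), 0
        with open(out_path, "w", encoding="utf-8") as out:
            for sent in sentences:
                if written >= budget:
                    break
                out.write("\n".join(sent) + "\n\n")
                written += len(sent)

    if __name__ == "__main__":
        train_file = "eng.train"              # placeholder path
        sents = list(read_sentences(train_file))
        n_words = sum(len(s) for s in sents)
        for frac in (0.125, 0.25, 0.5, 1.0):  # 1/8, 1/4, 1/2, full set
            write_portion(sents, frac, n_words, f"{train_file}.{frac}")
            # Train the baseline and cluster-based models on each portion with
            # the NERC toolkit of choice, then evaluate with conlleval.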
Out-of-domain evaluations
NERC systems are often used in out-of-domain settings, namely, to annotate data that greatly differs from the data from which the NERC models were learned. These differences can be of text genre and/or domain, but also because the assumptions of what constitutes a named entity might differ. It is therefore interesting to develop robust NERC systems across both domains and datasets. In this section we demonstrate that our approach, consisting of basic, general local features and the combination and stacking of clusters, produces robust NERC systems in three out-of-domain evaluation settings:
Class disagreements: Named entities are assigned to different classes in training and test.
Different text genre: The text genre of training and test data differs.
Annotation guidelines: The gold annotation of the test data follows different guidelines from the training data. This is usually reflected in different named entity spans.
The datasets and languages chosen for these experiments are based on the availability of both previous results and publicly distributed NERC systems, to facilitate direct comparison of our system with other approaches. Table TABREF83 specifies the datasets used for each out-of-domain setting and language. Details of each dataset can be found in Table TABREF10.
MUC 7 annotates seven entity types, including four that are not included in CoNLL data: DATE, MONEY, NUMBER and TIME entities. Furthermore, CoNLL includes the MISC class, which was absent in MUC 7. This means that there are class disagreements in the gold standard annotation between the training and test datasets. In addition to the four CoNLL classes, SONAR-1 includes PRODUCT and EVENT whereas Ancora also annotates DATE and NUMBER. For example, consider the following sentence of the MUC 7 gold standard (example taken from BIBREF31 ):
“...baloon, called the Virgin Global Challenger.”
The gold annotation in MUC 7 establishes that there is one named entity:
“...baloon, called [ORG Virgin] Global Challenger.”
However, according to CoNLL 2003 guidelines, the entire name should be annotated like MISC:
“...baloon, called [MISC Virgin Global Challenger].”
In this setting some adjustments are made to the NERC systems' output. Following previous work BIBREF31 , every named entity that is not LOC, ORG, PER or MISC is labeled as `O'. Additionally for MUC 7 every MISC named entity is changed to `O'. For English we used the models reported in Section UID62 . For Spanish and Dutch we trained our system with the Ancora and SONAR-1 corpora using the configurations described in Sections UID68 and UID70 respectively. Table TABREF85 compares our results with previous approaches: using MUC 7, BIBREF52 provide standard phrase results whereas BIBREF31 score token based F1 results, namely, each token is considered a chunk, instead of considering multi-token spans too. For Spanish we use the Stanford NER Spanish model (2015-01-30 version) trained with Ancora. For Dutch we compare our SONAR-1 system with the companion system distributed with the SONAR-1 corpus BIBREF33 . The results are summarized in Table TABREF85 .
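The label adjustments described above boil down to a simple mapping over the predicted tags. The sketch below assumes BIO-encoded tags and is only illustrative of the procedure, not of any particular system's output format.

    KEPT = {"LOC", "ORG", "PER", "MISC"}

    def adjust_tag(tag, drop_misc=False):
        # Map tags of unsupported classes (and MISC for MUC 7) to 'O'.
        if tag == "O":
            return tag
        etype = tag.split("-", 1)[-1]           # e.g. "B-DATE" -> "DATE"
        if etype not in KEPT or (drop_misc and etype == "MISC"):
            return "O"
        return tag

    # MUC 7 evaluation drops both unsupported classes and MISC:
    print([adjust_tag(t, drop_misc=True)
           for t in ["B-PER", "I-DATE", "B-MISC", "O", "B-ORG"]])
    # -> ['B-PER', 'O', 'O', 'O', 'B-ORG']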
In this setting the out-of-domain character is given by the differences in text genre between the English CoNLL 2003 set and the Wikigold corpus. We compare our system with English models trained on large amounts of silver-standard text (3.5M tokens) automatically created from Wikipedia BIBREF27. They report results on Wikigold showing that they outperformed their own CoNLL 2003 gold-standard model by 10 points in F1 score. We compare their result with our best cluster model in Table TABREF87. While the results of our baseline model confirm theirs, our clustering model's score is slightly higher. This result is interesting because it is arguably simpler to induce the clusters we use to train ixa-pipe-nerc than to create the silver-standard training set from Wikipedia as described in BIBREF27.
In this section the objective is to study not so much the differences in textual genre as the influence of substantially different annotation standards. We only use three classes (location, organization and person) to evaluate the best models presented for the in-domain evaluations, labeling as `O' every entity that is not LOC, ORG or PER.
The text genre of MEANTIME is not that different from CoNLL data. However, differences in the gold standard annotation result in significant disagreements regarding the span of the named entities BIBREF59 . For example, the following issues are markedly different with respect to the training data we use for each language:
Different criteria to decide when a named entity is annotated: in the expression “40 billion US air tanker contract” the MEANTIME gold standard does not mark `US' as location, whereas in the training data this is systematically annotated.
Mentions including the definite article within the name entity span: `the United States' versus `United States'.
Longer extents containing common nouns: in the MEANTIME corpus there are many entities such as “United States airframer Boeing”, which in this case is considered an organization, whereas in the training data this span will in general consist of two entities: `United States' as location and `Boeing' as organization.
Common nouns modifying the proper name: `Spokeswoman Sandy Angers' is annotated as a named entity of type PER, whereas in the training data used, the span of the named entity would usually be `Sandy Angers'.
CoNLL NER phrase-based evaluation punishes any bracketing error as both a false positive and a false negative. Thus, these span-related disagreements make this setting extremely hard for models trained according to other annotation guidelines, as shown by Table TABREF93. Our baseline models degrade by around 40 F1 points and the cluster-based models by around 35. Other systems' results worsen much more, especially for Spanish and Dutch. The token-based scores are in general better, but the relative performance of the systems across languages is similar.
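The following simplified sketch (not the official conlleval script) illustrates why a single span disagreement hurts phrase-based scores twice, while token-based scores degrade more gently; the tag sequences are toy placeholders.

    def spans(tags):
        # Extract (type, start, end) chunks from a BIO tag sequence.
        chunks, start, etype = [], None, None
        for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last chunk
            boundary = (tag == "O" or tag.startswith("B-")
                        or (tag.startswith("I-") and tag[2:] != etype))
            if boundary:
                if start is not None:
                    chunks.append((etype, start, i))
                start, etype = (i, tag[2:]) if tag != "O" else (None, None)
        return set(chunks)

    gold = ["B-LOC", "I-LOC", "B-ORG", "O"]     # e.g. a LOC span plus an ORG
    pred = ["B-ORG", "I-ORG", "I-ORG", "O"]     # one long ORG span predicted

    g, p = spans(gold), spans(pred)
    print("phrase-based: TP =", len(g & p), "FP =", len(p - g), "FN =", len(g - p))
    # -> TP = 0, FP = 1, FN = 2: one bracketing error hurts precision and recall.
    tok = sum(gt.split("-")[-1] == pt.split("-")[-1] for gt, pt in zip(gold, pred))
    print("token-based correct tokens:", tok, "of", len(gold))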
As an additional experiment, we also tested the English model recommended by Stanford NER, which is trained for three classes (LOC, PER, ORG) using a variety of public and unidentified private corpora (referred to as Stanford NER 3 class (ALL) in Table TABREF94). The results with respect to their CoNLL model improved by around 3 points in F1 score across named entity labels and evaluation types (phrase or token based). In view of these results, we experimented with multi-corpora training data added to our best CoNLL 2003 model (en-91-18). Thus, we trained using three public training sets: MUC 7, CoNLL 2003 and Ontonotes 4.0. The local model with the three training sets (Local ALL) improved by 12 and 17 points in F1 score across evaluations and entity types, outperforming our best model trained only with CoNLL 2003. Adding the clustering features gained a further 2 to 5 points, surpassing the Stanford NER 3 class multi-corpora model in every evaluation. We believe that the main reason for these improvements is the variety and quantity of annotations provided by Ontonotes (a 1M word corpus), and to a lesser extent by MUC 7, which includes some spans containing common nouns and determiners, making the model slightly more robust regarding the mention spans.
Discussion
Despite the simplicity of the ixa-pipe-nerc approach, we report best results for English in 4 different datasets: for CoNLL 2003 and for the three English out-of-domain evaluations. For German we improve the results of the best system in the GermEval 2014 task and obtain comparable results to previous work in the CoNLL 2003 dataset using publicly available data. In Spanish we provide results on CoNLL 2002 and in two out-of-domain evaluations clearly outperforming previous best results. For Dutch we improve over previous results in CoNLL 2002 and SONAR-1 data and two out-of-domain evaluations. Finally, for Basque (Egunkaria) the improvements are considerable.
Conclusion and Future Work
We have shown how to develop robust NERC systems across languages and datasets with minimal human intervention, even for languages with inflected named entities. This is based on adequately combining word representation features on top of shallow and general local features. Crucially, we have empirically demonstrated how to effectively combine various types of simple word representation features depending on the source data available. This has resulted in a clear methodology for using the three types of clustering features, which produces very competitive results in both in-domain and out-of-domain settings.
Thus, despite the relative simplicity of our approach, we report state of the art results for Dutch, English, German, Spanish and Basque in seven in-domain evaluations.
We also outperform previous work in eight out-of-domain evaluations, showing that our clustering features improve the robustness of NERC systems across datasets. Finally, we have measured how much our system's performance degrades when the amount of supervised data is drastically cut. The results show our models are still very competitive even when reducing the supervised data by half or more. This, together with the lack of linguistic features, facilitates the easy and fast development of NERC systems for new domains or languages.
In future work we would like to further explore the various types of domain adaptation required for robust performance across text genres and domains, perhaps including micro-blogs and noisy text such as tweets. Furthermore, we are also planning to adapt our techniques to other sequence labeling problems such as Opinion Target Extraction BIBREF13, BIBREF14 and Super Sense tagging BIBREF60.
Acknowledgments
We would like to thank the anonymous reviewers for their comments to improve this paper. We would also like to thank Sebastian Padó for his help training the Clark clusters. This work has been supported by the European projects NewsReader, EC/FP7/316404 and QTLeap - EC/FP7/610516, and by the Spanish Ministry for Science and Innovation (MICINN) SKATER, Grant No. TIN2012-38584-C06-01 and TUNER, TIN2015-65308-C5-1-R. | Perceptron model using the local features. |
3d7ab856a5cade7ab374fc2f2713a4d0a30bbd56 | 3d7ab856a5cade7ab374fc2f2713a4d0a30bbd56_0 | Q: What multilingual word representations are used?
Text: Motivations
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drastically decreases when applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6 with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, e.g., English-Czech BIBREF15 and English-Chinese BIBREF16, but not from a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers with positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), it has not yet been explored for irony.
We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
A new freely available corpus of Arabic tweets manually annotated for irony detection.
Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
Cross-lingual ID: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how the proposed models are culture-dependent. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13 that we extended to different political issues and events related to the Middle East and Maghreb that took place between 2011 and 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non-ironic ones ($NI$) written using standard (formal) Arabic and different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweets' labels, a sample of $3,000$ $I$ and $3,000$ $NI$ was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. As the agreement was relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets and tweets that depend on external links, images or videos to understand their meaning.
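For reference, the inter-annotator agreement reported above can be computed as in the following minimal sketch; the label lists are toy placeholders (1 = ironic, 0 = non-ironic), not our data.

    from collections import Counter

    def cohen_kappa(a, b):
        n = len(a)
        po = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
        ca, cb = Counter(a), Counter(b)
        pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2  # chance agreement
        return (po - pe) / (1 - pe)

    ann1 = [1, 1, 0, 1, 0, 0, 1, 0]
    ann2 = [1, 0, 0, 1, 0, 1, 1, 0]
    print(round(cohen_kappa(ann1, ann2), 3))    # -> 0.5 for these toy labels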
French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.
English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.
Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for train and test sets to have fair cross-lingual experiments as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the class distribution (ironic vs. non ironic), we do not choose a specific ratio but use the resulting distribution from the random shuffling process.
Monolingual Irony Detection
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The results can show which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, they can show us to what extent ID is language dependent by comparing monolingual results to multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
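As a rough illustration of this kind of neural model (the hyperparameters below are placeholders, not the values selected by $Hyperopt$, and the embedding matrix is random), a CNN tweet classifier over pretrained 300-dimensional embeddings can be sketched as follows.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_cnn(vocab_size, emb_matrix, max_len=50, n_filters=100,
                  filter_sizes=(3, 4, 5), dropout=0.5):
        inp = layers.Input(shape=(max_len,), dtype="int32")
        # Pretrained 300-d vectors (e.g., AraVec / FastText / word2vec), kept
        # trainable so they are fine-tuned during training.
        emb = layers.Embedding(
            vocab_size, 300,
            embeddings_initializer=tf.keras.initializers.Constant(emb_matrix),
            trainable=True)(inp)
        pooled = [layers.GlobalMaxPooling1D()(
                      layers.Conv1D(n_filters, k, activation="relu")(emb))
                  for k in filter_sizes]
        x = layers.Dropout(dropout)(layers.Concatenate()(pooled))
        out = layers.Dense(1, activation="sigmoid")(x)  # ironic vs. non-ironic
        model = Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Usage with a random placeholder embedding matrix:
    vocab = 20000
    model = build_cnn(vocab, np.random.uniform(-0.05, 0.05, (vocab, 300)))
    model.summary()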
Results. Table TABREF9 shows the results obtained when using train-test configurations for each language. For English, our results, in terms of macro F-score ($F$), were not comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more productive compared to standard surface and lexicon-based features.
Cross-lingual Irony Detection
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results.
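For illustration, the sketch below shows only the supervised Procrustes step of such a linear mapping, learned from a seed bilingual dictionary; Conneau et al.'s full approach additionally includes adversarial initialization and CSLS-based refinement, which are omitted here, and all matrices shown are random placeholders.

    import numpy as np

    def procrustes(X_src, Y_tgt):
        # Solve W = argmin ||X W^T - Y||_F with W orthogonal, via SVD.
        u, _, vt = np.linalg.svd(Y_tgt.T @ X_src)
        return u @ vt                      # W, shape (dim, dim)

    def nearest_targets(word_vec, W, tgt_matrix, tgt_words, k=5):
        # Map a source vector into the target space, return nearest target words.
        mapped = W @ word_vec
        sims = tgt_matrix @ mapped / (
            np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(mapped) + 1e-9)
        return [tgt_words[i] for i in np.argsort(-sims)[:k]]

    # Placeholder data: 1000 seed pairs of 300-d vectors (in real use, vectors
    # of the entries of a bilingual dictionary in each monolingual space).
    rng = np.random.default_rng(0)
    X, Y = rng.normal(size=(1000, 300)), rng.normal(size=(1000, 300))
    W = procrustes(X, Y)
    # After alignment, a classifier trained on one language's mapped embeddings
    # can be applied to tweets of another language.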
From a semantic perspective, despite the language and cultural differences between Arabic and French, the CNN results show high performance compared to the other language pairs when we train on each of these two languages and test on the other one. The same holds for the French and English pair, although the scores are somewhat lower when we train on French; we observe a similar case when we train on Arabic and test on English. We can explain this by the fact that Arabic and French tweets are written quite informally and contain many dialect words that may not exist in the pretrained embeddings we used, compared to the English ones (lower embedding coverage ratio), which makes it harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help when the semantic aspect shows weak detection; this is the case for the $Ar\longrightarrow En$ configuration. It is worth mentioning that the highest result we get in this experiment is from the En$\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non-European languages (cf. (En/Fr)$\rightarrow $Ar), we obtain results similar to those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0), and the best results are achieved by Ar $\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative of the ironic tweets.
Discussions and Conclusion
This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting providing a cross-lingual word representation or basic surface features. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on cross-lingual word representation shows that irony has a certain similarity between the languages we targeted despite the cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. The manual analysis of the common misclassified tweets across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!) where the writer mocks the Egyptian revolution, as the current president "Sisi" is viewed as one of Mubarak's fellows. (2) Second, the presence of out of vocabulary (OOV) terms because of the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the OOV set of unseen words is large during the training process. We found tweets in all the three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g., phat instead of fat). (3) Another important issue is the difficulty of dealing with the Arabic language. Arabic tweets are often characterized by non-diacritised texts, a large variation of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and finally linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found that some tweets contain only words from one of the varieties and most of these words do not exist in the Arabic embeddings model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open towards multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31). | a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space |
212977344f4bf2ae8f060bdac0317db2d1801724 | 212977344f4bf2ae8f060bdac0317db2d1801724_0 | Q: Do the authors identify any cultural differences in irony use?
Text: Motivations
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drastically decreases when applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6 with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, e.g., English-Czech BIBREF15 and English-Chinese BIBREF16, but not from a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers with positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), it has not yet been explored for irony.
We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
A new freely available corpus of Arabic tweets manually annotated for irony detection.
Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
Cross-lingual ID: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how the proposed models are culture-dependent. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13 that we extended to different political issues and events related to the Middle East and Maghreb that took place between 2011 and 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non-ironic ones ($NI$) written using standard (formal) Arabic and different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweets' labels, a sample of $3,000$ $I$ and $3,000$ $NI$ was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. As the agreement was relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets and tweets that depend on external links, images or videos to understand their meaning.
French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.
English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.
Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for train and test sets to have fair cross-lingual experiments as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the classes distribution (ironic vs. non ironic), we do not choose a specific ratio but we use the resulted distribution from the random shuffling process.
Monolingual Irony Detection
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The results can show which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, they can show us to what extent ID is language dependent by comparing monolingual results to multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
Results. Table TABREF9 shows the results obtained when using train-test configurations for each language. For English, our results, in terms of macro F-score ($F$), were not comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more productive compared to standard surface and lexicon-based features.
Cross-lingual Irony Detection
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results.
From a semantic perspective, despite the language and cultural differences between Arabic and French languages, CNN results show a high performance comparing to the other languages pairs when we train on each of these two languages and test on the other one. Similarly, for the French and English pair, but when we train on French they are quite lower. We have a similar case when we train on Arabic and test on English. We can justify that by, the language presentation of the Arabic and French tweets are quite informal and have many dialect words that may not exist in the pretrained embeddings we used comparing to the English ones (lower embeddings coverage ratio), which become harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help in the case when the semantic aspect shows weak detection; this is the case for the $Ar\longrightarrow En$ configuration. It is worthy to mention that the highest result we get in this experiment is from the En$\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non European languages (cf. (En/Fr)$\rightarrow $Ar), we obtain similar results than those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and best results are achieved by Ar $\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative way of the ironic tweets.
Discussions and Conclusion
This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting providing a cross-lingual word representation or basic surface features. Our monolingual results are comparable to state of the art for the three languages. The CNN architecture trained on cross-lingual word representation shows that irony has a certain similarity between the languages we targeted despite the cultural differences which confirm that irony is a universal phenomena, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. The manual analysis of the common misclassified tweets across the languages in the multilingual setup, shows that classification errors are due to three main factors. (1) First, the absence of context where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!) where the writer mocks the Egyptian revolution, as the actual president "Sisi" is viewed as Mubarak's fellows. (2) Second, the presence of out of vocabulary (OOV) terms because of the weak coverage of the mutlilingual embeddings which make the system fails to generalize when the OOV set of unseen words is large during the training process. We found tweets in all the three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g phat instead of fat). (3) Another important issue is the difficulty to deal with the Arabic language. Arabic tweets are often characterized by non-diacritised texts, a large variations of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and finally linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found some tweets contain only words from one of the varieties and most of these words do not exist in the Arabic embeddings model. For example in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open towards multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack of annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31). | No |
0c29d08f766b06ceb2421aa402e71a2d65a5a381 | 0c29d08f766b06ceb2421aa402e71a2d65a5a381_0 | Q: What neural architectures are used?
Text: Motivations
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drastically decreases when applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6 with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, e.g., English-Czech BIBREF15 and English-Chinese BIBREF16, but not from a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers with positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), it has not yet been explored for irony.
We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
A new freely available corpus of Arabic tweets manually annotated for irony detection.
Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
Cross-lingual ID: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how the proposed models are culture-dependent. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13 that we extended to different political issues and events related to the Middle East and Maghreb that took place between 2011 and 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non-ironic ones ($NI$) written using standard (formal) Arabic and different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweets' labels, a sample of $3,000$ $I$ and $3,000$ $NI$ was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. As the agreement was relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets and tweets that depend on external links, images or videos to understand their meaning.
French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.
English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.
Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for train and test sets to have fair cross-lingual experiments as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the classes distribution (ironic vs. non ironic), we do not choose a specific ratio but we use the resulted distribution from the random shuffling process.
Monolingual Irony Detection
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The results can show which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, they can show us to what extent ID is language dependent by comparing monolingual results to multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations, among them Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used Convolutional Neural Network (CNN) network whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English . For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
Results. Table TABREF9 shows the results obtained when using train-test configurations for each language. For English, our results, in terms of macro F-score ($F$), were not comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more productive compared to standard surface and lexicon-based features.
Cross-lingual Irony Detection
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results.
From a semantic perspective, despite the language and cultural differences between Arabic and French languages, CNN results show a high performance comparing to the other languages pairs when we train on each of these two languages and test on the other one. Similarly, for the French and English pair, but when we train on French they are quite lower. We have a similar case when we train on Arabic and test on English. We can justify that by, the language presentation of the Arabic and French tweets are quite informal and have many dialect words that may not exist in the pretrained embeddings we used comparing to the English ones (lower embeddings coverage ratio), which become harder for the CNN to learn a clear semantic pattern. Another point is the presence of Arabic dialects, where some dialect words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the text-based features can help in the case when the semantic aspect shows weak detection; this is the case for the $Ar\longrightarrow En$ configuration. It is worthy to mention that the highest result we get in this experiment is from the En$\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European vs. non European languages (cf. (En/Fr)$\rightarrow $Ar), we obtain similar results than those obtained in the monolingual experiment (macro F-score 62.4 vs. 68.0) and best results are achieved by Ar $\rightarrow $(En/Fr). This shows that there are pragmatic devices in common between both sides and, in a similar way, similar text-based patterns in the narrative way of the ironic tweets.
Discussions and Conclusion
This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be successfully used in a multilingual setting providing a cross-lingual word representation or basic surface features. Our monolingual results are comparable to state of the art for the three languages. The CNN architecture trained on cross-lingual word representation shows that irony has a certain similarity between the languages we targeted despite the cultural differences which confirm that irony is a universal phenomena, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. The manual analysis of the common misclassified tweets across the languages in the multilingual setup, shows that classification errors are due to three main factors. (1) First, the absence of context where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!) where the writer mocks the Egyptian revolution, as the actual president "Sisi" is viewed as Mubarak's fellows. (2) Second, the presence of out of vocabulary (OOV) terms because of the weak coverage of the mutlilingual embeddings which make the system fails to generalize when the OOV set of unseen words is large during the training process. We found tweets in all the three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g phat instead of fat). (3) Another important issue is the difficulty to deal with the Arabic language. Arabic tweets are often characterized by non-diacritised texts, a large variations of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and finally linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages like English and French. We found some tweets contain only words from one of the varieties and most of these words do not exist in the Arabic embeddings model. For example in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, considering only these three available words, we are not able to understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirmed that the door is open towards multilingual approaches for ID. Furthermore, our results showed that ID can be applied to languages that lack of annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31). | Convolutional Neural Network (CNN) |
c9ee70c481c801892556eb6b9fd8ee38197923be | c9ee70c481c801892556eb6b9fd8ee38197923be_0 | Q: What text-based features are used?
Text: Motivations
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drops drastically when they are applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6, with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, e.g. English-Czech BIBREF15 and English-Chinese BIBREF16, but not from a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), no one explored it for irony.
We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
A new freely available corpus of Arabic tweets manually annotated for irony detection.
Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
Cross-lingual ID: We experiment with cross-lingual word representations by training on one language and testing on another to measure how culture-dependent the proposed models are. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13, which we extended to different political issues and events related to the Middle East and Maghreb that took place during the years 2011 to 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non ironic ($NI$) written in standard (formal) Arabic and in different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweet labels, a sample of $3,000$ $I$ and $3,000$ $NI$ tweets was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. As the agreement is relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets, and tweets that depend on external links, images or videos to understand their meaning.
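The agreement figure above is a standard Cohen's kappa computation; the following is a minimal sketch of how such a score could be reproduced, assuming the two annotators' labels are available as parallel lists (the label values shown here are purely illustrative).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel label lists from the two annotators ("I" = ironic, "NI" = non ironic).
annotator_1 = ["I", "NI", "I", "I", "NI", "I"]
annotator_2 = ["I", "NI", "NI", "I", "NI", "I"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_1, annotator_2):.2f}")
```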
French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.
English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.
Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for the train and test sets so that the cross-lingual experiments are fair as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the class distribution (ironic vs. non ironic), we do not choose a specific ratio but use the distribution resulting from the random shuffling process.
Monolingual Irony Detection
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The outcome indicates which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, it shows to what extent ID is language dependent when these results are compared to the multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features.
Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
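To make the neural setup concrete, below is a minimal sketch of a Kim-style CNN text classifier over pretrained 300-dimensional embeddings, written with tf.keras. The filter widths, number of filters and dropout rate are illustrative placeholders rather than the values selected by Hyperopt, and the random embedding matrix stands in for AraVec/FastText/Word2vec.

```python
import numpy as np
from tensorflow.keras import layers, models

vocab_size, emb_dim, max_len = 30000, 300, 50
embedding_matrix = np.random.rand(vocab_size, emb_dim)  # stand-in for AraVec / FastText / Word2vec

inp = layers.Input(shape=(max_len,))
emb = layers.Embedding(vocab_size, emb_dim,
                       weights=[embedding_matrix],
                       trainable=True)(inp)              # embeddings are fine-tuned during training
branches = []
for width in (3, 4, 5):                                  # parallel convolutions over n-gram windows
    conv = layers.Conv1D(filters=100, kernel_size=width, activation="relu")(emb)
    branches.append(layers.GlobalMaxPooling1D()(conv))
x = layers.Dropout(0.5)(layers.Concatenate()(branches))
out = layers.Dense(1, activation="sigmoid")(x)           # ironic vs. non-ironic

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```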
Results. Table TABREF9 shows the results obtained when using the train-test configuration of each language. For English, our results, in terms of macro F-score ($F$), are not directly comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more effective than standard surface and lexicon-based features.
Cross-lingual Irony Detection
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results.
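To illustrate the kind of mapping involved, here is a minimal NumPy sketch of aligning two monolingual embedding spaces with an orthogonal (Procrustes) mapping, the refinement step also used in the MUSE toolkit; the unsupervised, adversarial initialization of Conneau et al. is omitted, and the seed-dictionary matrices below are random placeholders.

```python
import numpy as np

def procrustes_mapping(X_src, Y_tgt):
    """X_src, Y_tgt: (n_pairs, dim) embeddings of seed translation pairs (rows are word vectors)."""
    U, _, Vt = np.linalg.svd(Y_tgt.T @ X_src)
    return U @ Vt                     # orthogonal W minimizing ||W x_src - y_tgt|| over the seed pairs

rng = np.random.default_rng(0)
X_src = rng.standard_normal((1000, 300))   # placeholder source-language dictionary vectors
Y_tgt = rng.standard_normal((1000, 300))   # placeholder target-language dictionary vectors

W = procrustes_mapping(X_src, Y_tgt)
# Any source-language vector can now be projected into the target space
# before being fed to a CNN trained on the target language:
mapped_src = X_src @ W.T
```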
From a semantic perspective, despite the linguistic and cultural differences between Arabic and French, the CNN results show high performance compared to the other language pairs when we train on one of these two languages and test on the other. The same holds for the French and English pair, although the scores are somewhat lower when we train on French; we observe a similar drop when we train on Arabic and test on English. One explanation is that Arabic and French tweets are written quite informally and contain many dialectal words that may be missing from the pretrained embeddings we used, in contrast to the English ones (lower embedding coverage ratio), which makes it harder for the CNN to learn a clear semantic pattern. The presence of Arabic dialects aggravates this, since some dialectal words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the surface features can help when the semantic signal is weak; this is the case for the $Ar\longrightarrow En$ configuration. It is worth mentioning that the highest result in this experiment comes from the En$\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European and non-European languages (cf. (En/Fr)$\rightarrow $Ar), we obtain results similar to those of the monolingual experiment (macro F-score 62.4 vs. 68.0), and the best results are achieved by Ar $\rightarrow $(En/Fr). This suggests that both sides share common pragmatic devices and, likewise, similar text-based patterns in the narrative style of ironic tweets.
Discussions and Conclusion
This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be used successfully in a multilingual setting, provided that a cross-lingual word representation or basic surface features are available. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on the cross-lingual word representation shows that irony is expressed in a similar way across the languages we targeted despite their cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. A manual analysis of the tweets commonly misclassified across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!), where the writer mocks the Egyptian revolution, since the current president "Sisi" is viewed as one of Mubarak's fellows. (2) Second, the presence of out-of-vocabulary (OOV) terms caused by the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the set of unseen words is large during training. We found tweets in all three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g. phat instead of fat). (3) Third, the difficulty of dealing with the Arabic language. Arabic tweets are often characterized by non-diacritised text, a large variation of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), the presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages such as English and French. Some tweets contain only words from one of these varieties, and most of those words do not exist in the Arabic embeddings model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, with only these three words available, we cannot understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirm that the door is open to multilingual approaches for ID. Furthermore, our results show that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31). | language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities), language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words) |
a24a7a460fd5e60d71a7e787401c68caa4702df6 | a24a7a460fd5e60d71a7e787401c68caa4702df6_0 | Q: What monolingual word representations are used?
Text: Motivations
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm.
Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drops drastically when they are applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6, with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, e.g. English-Czech BIBREF15 and English-Chinese BIBREF16, but not from a cross-lingual perspective.
In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), no one explored it for irony.
We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share quite the same cultural background vs. less culturally close languages). Our approach does not rely either on machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et. al BIBREF25 concluded that their multi-layer annotated schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are:
A new freely available corpus of Arabic tweets manually annotated for irony detection.
Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent.
Cross-lingual ID: We experiment with cross-lingual word representations by training on one language and testing on another to measure how culture-dependent the proposed models are. Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
Data
Arabic dataset (Ar=$11,225$ tweets). Our starting point was the corpus built by BIBREF13, which we extended to different political issues and events related to the Middle East and Maghreb that took place during the years 2011 to 2018. Tweets were collected using a set of predefined keywords (which targeted specific political figures or events), whether or not they contained Arabic ironic hashtags (سخرية>#, مسخرة>#, تهكم>#, استهزاء>#). The collection process resulted in a set of $6,809$ ironic tweets ($I$) vs. $15,509$ non ironic ($NI$) written in standard (formal) Arabic and in different Arabic language varieties: Egypt, Gulf, Levantine, and Maghrebi dialects.
To investigate the validity of using the original tweet labels, a sample of $3,000$ $I$ and $3,000$ $NI$ tweets was manually annotated by two Arabic native speakers, which resulted in $2,636$ $I$ vs. $2,876$ $NI$. The inter-annotator agreement using Cohen's Kappa was $0.76$, while the agreement score between the annotators' labels and the original labels was $0.6$. As the agreement is relatively good given the difficulty of the task, we added $5,713$ instances sampled from the original unlabeled dataset to our manually labeled part. The added tweets have been manually checked to remove duplicates, very short tweets, and tweets that depend on external links, images or videos to understand their meaning.
French dataset (Fr=$7,307$ tweets). We rely on the corpus used for the DEFT 2017 French shared task on irony BIBREF3 which consists of tweets relative to a set of topics discussed in the media between 2014 and 2016 and contains topic keywords and/or French irony hashtags (#ironie, #sarcasme). Tweets have been annotated by three annotators (after removing the original labels) with a reported Cohen's Kappa of $0.69$.
English dataset (En=$11,225$ tweets). We use the corpus built by BIBREF15 which consists of $100,000$ tweets collected using the hashtag #sarcasm. It was used as benchmark in several works BIBREF27, BIBREF28. We sliced a subset of approximately $11,200$ tweets to match the sizes of the other languages' datasets.
Table TABREF6 shows the tweet distribution in all corpora. Across the three languages, we keep a similar number of instances for the train and test sets so that the cross-lingual experiments are fair as well (see Section SECREF4). Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the class distribution (ironic vs. non ironic), we do not choose a specific ratio but use the distribution resulting from the random shuffling process.
Monolingual Irony Detection
It is important to note that our aim is not to outperform state-of-the-art models in monolingual ID but to investigate which of the monolingual architectures (neural or feature-based) can achieve results comparable to existing systems. The outcome indicates which kind of features works better in the monolingual setting and can be employed to detect irony in a multilingual setting. In addition, it shows to what extent ID is language dependent when these results are compared to the multilingual ones. Two models have been built, as explained below. Prior to learning, basic preprocessing steps were performed for each language (e.g., removing foreign characters, ironic hashtags, mentions, and URLs).
Feature-based models. We used state-of-the-art features that have shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features.
Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on $AraVec$ BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the $Hyperopt$ library.
Results. Table TABREF9 shows the results obtained when using the train-test configuration of each language. For English, our results, in terms of macro F-score ($F$), are not directly comparable to those of BIBREF15, BIBREF33, as we used 11% of the original dataset. For French, our scores are in line with those reported in the state of the art (cf. the best system in the irony shared task achieved $F=78.3$ BIBREF3). They outperform those obtained for Arabic ($A=71.7$) BIBREF13 and are comparable to those recently reported in the irony detection shared task in Arabic tweets BIBREF14, BIBREF34 ($F=84.4$). Overall, the results show that the semantic information captured by the embedding space is more effective than standard surface and lexicon-based features.
Cross-lingual Irony Detection
We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of interjections, emoticon or specific punctuation) to verify which pair of the three languages: (a) has similar ironic pragmatic devices, and (b) uses similar text-based pattern in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages BIBREF35, we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries BIBREF35 or unsupervised methods relying on monolingual corpora BIBREF36, BIBREF37, BIBREF38. For our experiments, we use Conneau et al 's approach as it showed superior results with respect to the literature BIBREF36. We perform several experiments by training on one language ($lang_1$) and testing on another one ($lang_2$) (henceforth $lang_1\rightarrow lang_2$). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non European languages. In each experiment, we took 20% from the training to validate the model before the testing process. Table TABREF11 presents the results.
From a semantic perspective, despite the linguistic and cultural differences between Arabic and French, the CNN results show high performance compared to the other language pairs when we train on one of these two languages and test on the other. The same holds for the French and English pair, although the scores are somewhat lower when we train on French; we observe a similar drop when we train on Arabic and test on English. One explanation is that Arabic and French tweets are written quite informally and contain many dialectal words that may be missing from the pretrained embeddings we used, in contrast to the English ones (lower embedding coverage ratio), which makes it harder for the CNN to learn a clear semantic pattern. The presence of Arabic dialects aggravates this, since some dialectal words may not exist in the multilingual pretrained embedding model that we used. On the other hand, from the text-based perspective, the results show that the surface features can help when the semantic signal is weak; this is the case for the $Ar\longrightarrow En$ configuration. It is worth mentioning that the highest result in this experiment comes from the En$\rightarrow $Fr pair, as both languages use Latin characters. Finally, when investigating the relatedness between European and non-European languages (cf. (En/Fr)$\rightarrow $Ar), we obtain results similar to those of the monolingual experiment (macro F-score 62.4 vs. 68.0), and the best results are achieved by Ar $\rightarrow $(En/Fr). This suggests that both sides share common pragmatic devices and, likewise, similar text-based patterns in the narrative style of ironic tweets.
Discussions and Conclusion
This paper proposes the first multilingual ID in tweets. We show that simple monolingual architectures (either neural or feature-based) trained separately on each language can be used successfully in a multilingual setting, provided that a cross-lingual word representation or basic surface features are available. Our monolingual results are comparable to the state of the art for the three languages. The CNN architecture trained on the cross-lingual word representation shows that irony is expressed in a similar way across the languages we targeted despite their cultural differences, which confirms that irony is a universal phenomenon, as already shown in previous linguistic studies BIBREF39, BIBREF25, BIBREF40. A manual analysis of the tweets commonly misclassified across the languages in the multilingual setup shows that classification errors are due to three main factors. (1) First, the absence of context, where writers did not provide sufficient information to capture the ironic sense even in the monolingual setting, as in نبدا تاني يسقط يسقط حسني مبارك !! > (Let's start again, get off get off Mubarak!!), where the writer mocks the Egyptian revolution, since the current president "Sisi" is viewed as one of Mubarak's fellows. (2) Second, the presence of out-of-vocabulary (OOV) terms caused by the weak coverage of the multilingual embeddings, which makes the system fail to generalize when the set of unseen words is large during training. We found tweets in all three languages written in a very informal way, where some characters of the words were deleted, duplicated or written phonetically (e.g. phat instead of fat). (3) Third, the difficulty of dealing with the Arabic language. Arabic tweets are often characterized by non-diacritised text, a large variation of unstandardized dialectal Arabic (recall that our dataset has 4 main varieties, namely Egypt, Gulf, Levantine, and Maghrebi), the presence of transliterated words (e.g. the word table becomes طابلة> (tabla)), and linguistic code switching between Modern Standard Arabic and several dialects, and between Arabic and other languages such as English and French. Some tweets contain only words from one of these varieties, and most of those words do not exist in the Arabic embeddings model. For example, in مبارك بقاله كام يوم مامتش .. هو عيان ولاه ايه #مصر > (Since many days Mubarak didn't die .. is he sick or what? #Egypt), only the words يوم> (day), مبارك> (Mubarak), and هو> (he) exist in the embeddings. Clearly, with only these three words available, we cannot understand the context or the ironic meaning of the tweet. To conclude, our multilingual experiments confirm that the door is open to multilingual approaches for ID. Furthermore, our results show that ID can be applied to languages that lack annotated data. Our next step is to experiment with other languages such as Hindi and Italian.
Acknowledgment
The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE (PGC2018-096212-B-C31). | AraVec for Arabic, FastText for French, and Word2vec Google News for English. |
5758ebff49807a51d080b0ce10ba3f86dcf71925 | 5758ebff49807a51d080b0ce10ba3f86dcf71925_0 | Q: What do they constrain using integer linear programming?
Text: Introduction
Summarization is a promising technique for reducing information overload. It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and concatenating them to form a summary, whereas abstractive methods can involve a number of high-level text operations such as word reordering, paraphrasing, and generalization BIBREF1 . To date, summarization has been successfully exploited for a number of text domains, including news articles BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , product reviews BIBREF6 , online forum threads BIBREF7 , meeting transcripts BIBREF8 , scientific articles BIBREF9 , BIBREF10 , student course responses BIBREF11 , BIBREF12 , and many others.
Summarizing content contributed by multiple authors is particularly challenging. This is partly because people tend to use different expressions to convey the same semantic meaning. In a recent study of summarizing student responses to post-class reflective questions, Luo et al., Luo:2016:NAACL observe that the students use distinct lexical items such as “bike elements” and “bicycle parts” to refer to the same concept. The student responses frequently contain expressions with little or no word overlap, such as “the main topics of this course” and “what we will learn in this class,” when they are prompted with “describe what you found most interesting in today's class.” A similar phenomenon has also been observed in the news domain, where reporters use different nicknames, e.g., “Bronx Zoo” and “New York Highlanders,” to refer to the baseball team “New York Yankees.” Luo et al., Luo:2016:NAACL report that about 80% of the document bigrams occur only once or twice for the news domain, whereas the ratio is 97% for student responses, suggesting the latter domain has a higher level of lexical diversity. When source documents contain diverse expressions conveying the same meaning, it can hinder the summarization system's ability to effectively identify salient content from the source documents. It can also increase the summary redundancy if lexically-distinct but semantically-similar expressions are included in the summary.
Existing neural encoder-decoder models may not work well at summarizing such content with high lexical variety BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . On one hand, training the neural sequence-to-sequence models requires a large amount of parallel data. The cost of annotating gold-standard summaries for many domains such as student responses can be prohibitive. Without sufficient labelled data, the models can only be trained on automatically gathered corpora, where an instance often includes a news article paired with its title or a few highlights. On the other hand, the summaries produced by existing neural encoder-decoder models are far from perfect. The summaries are mostly extractive with minor edits BIBREF16 , contain repetitive words and phrases BIBREF17 and may not accurately reproduce factual details BIBREF18 , BIBREF19 . We examine the performance of a state-of-the-art neural summarization model in Section § SECREF28 .
In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts" will be allowed to partially contain “bike elements" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings.
The research contributions of this work include the following.
In the following sections we first present a thorough review of the related work (§ SECREF2 ), then introduce our ILP summarization framework (§ SECREF3 ) with a low-rank approximation of the co-occurrence matrix optimized using the proximal gradient method (§ SECREF4 ). Experiments are performed on a collection of eight datasets (§ SECREF5 ) containing student responses to post-class reflective questions, product reviews, peer reviews, and news articles. Intrinsic evaluation (§ SECREF20 ) shows that the low-rank approximation algorithm can effectively group distinct expressions used in similar semantic context. For extrinsic evaluation (§ SECREF28 ) our proposed framework obtains competitive results in comparison to state-of-the-art summarization systems. Finally, we conduct comprehensive studies analyzing the characteristics of the datasets and suggest critical factors that affect the summarization performance (§ SECREF7 ).
Related Work
Extractive summarization has undergone great development over the past decades. It focuses on extracting relevant sentences from a single document or a cluster of documents related to a particular topic. Various techniques have been explored, including maximal marginal relevance BIBREF22 , submodularity BIBREF23 , integer linear programming BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF4 , minimizing reconstruction error BIBREF28 , graph-based models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , determinantal point processes BIBREF33 , neural networks and reinforcement learning BIBREF34 , BIBREF35 among others. Nonetheless, most studies are bound to a single dataset and few approaches have been evaluated in a cross-domain setting. In this paper, we propose an enhanced ILP framework and evaluate it on a broad range of datasets. We present an in-depth analysis of the dataset characteristics derived from both source documents and reference summaries to understand how domain-specific factors may affect the applicability of the proposed approach.
Neural summarization has seen promising improvements in recent years with encoder-decoder models BIBREF13 , BIBREF14 . The encoder condenses the source text to a dense vector, whereas the decoder unrolls the vector to a summary sequence by predicting one word at a time. A number of studies have been proposed to deal with out-of-vocabulary words BIBREF16 , improve the attention mechanism BIBREF36 , BIBREF37 , BIBREF38 , avoid generating repetitive words BIBREF16 , BIBREF17 , adjust summary length BIBREF39 , encode long text BIBREF40 , BIBREF41 and improve the training objective BIBREF42 , BIBREF15 , BIBREF43 . To date, these studies focus primarily on single-document summarization and headline generation. This is partly because training neural encoder-decoder models requires a large amount of parallel data, yet the cost of annotating gold-standard summaries for most domains can be prohibitive. We validate the effectiveness of a state-of-the-art neural summarization system BIBREF16 on our collection of datasets and report results in § SECREF28 .
In this paper we focus on the integer linear programming-based summarization framework and propose enhancements to it to summarize text content with high lexical diversity. The ILP framework is shown to perform strongly on extractive summarization BIBREF20 , BIBREF44 , BIBREF21 . It produces an optimal selection of sentences that (i) maximize the coverage of important concepts discussed in the source, (ii) minimize the redundancy in pairs of selected sentences, and (iii) ensure the summary length does not exceed a limit. Previous work has largely focused on improving the estimation of concept weights in the ILP framework BIBREF45 , BIBREF46 , BIBREF47 , BIBREF48 , BIBREF4 . However, distinct lexical items such as “bike elements” and “bicycle parts” are treated as different concepts and their weights are not shared. In this paper we overcome this issue by proposing a low-rank approximation to the sentence-concept co-occurrence matrix to intrinsically group lexically-distinct but semantically-similar expressions; they are considered as a whole when maximizing concept coverage and minimizing redundancy.
Our work is also different from the traditional approaches using dimensionality reduction techniques such as non-negative matrix factorization (NNMF) and latent semantic analysis (LSA) for summarization BIBREF49 , BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 . In particular, Wang et al. wang2008multi use NNMF to group sentences into clusters; Conroy et al. conroy-EtAl:2013:MultiLing explore NNMF and LSA to obtain better estimates of term weights; Wang et al. wang2016low use low-rank approximation to cast sentences and images to the same embedding space. Different from the above methods, our proposed framework focuses on obtaining a low-rank approximation of the co-occurrence matrix embedded in the ILP framework, so that diverse expressions can share co-occurrence frequencies. Note that out-of-vocabulary expressions and domain-specific terminologies are abundant in our datasets, therefore simply calculating the lexical overlap BIBREF54 or cosine similarity of word embeddings BIBREF55 cannot serve our goal well.
This manuscript extends our previous work on summarizing student course responses BIBREF11 , BIBREF56 , BIBREF12 submitted after each lecture via a mobile app named CourseMIRROR BIBREF57 , BIBREF58 , BIBREF59 . The students are asked to respond to reflective prompts such as “describe what you found most interesting in today's class” and “describe what was confusing or needed more detail.” For large classes with hundreds of students, it can be quite difficult for instructors to manually analyze the student responses, hence the help of automatic summarization. Our extensions of this work are along three dimensions: (i) we crack the “black-box” of the low-rank approximation algorithm to understand if it indeed allows lexically-diverse but semantically-similar items to share co-occurrence statistics; (ii) we compare the ILP-based summarization framework with state-of-the-art baselines, including a popular neural encoder-decoder model for summarization; (iii) we expand the student feedback datasets to include responses collected from materials science and engineering, statistics for industrial engineers, and data structures. We additionally experiment with reviews and news articles. Analyzing the unique characteristics of each dataset allows us to identify crucial factors influencing the summarization performance.
With the fast development of Massive Open Online Course (MOOC) platforms, more attention is being dedicated to analyzing educationally-oriented language data. These studies seek to identify student leaders from MOOC discussion forums BIBREF60 , perform sentiment analysis on student discussions BIBREF61 , improve student engagement and retention BIBREF62 , BIBREF63 , and use language generation techniques to automatically generate feedback to students BIBREF64 . The focus of this paper is on automatically summarizing student responses so that instructors can collect feedback in a timely manner. We expect the developed summarization techniques and result analysis to further summarization research in similar text genres exhibiting high lexical variety.
ILP Formulation
Let INLINEFORM0 be a set of documents that consist of INLINEFORM1 sentences in total. Let INLINEFORM2 , INLINEFORM3 indicate if a sentence INLINEFORM4 is selected ( INLINEFORM5 ) or not ( INLINEFORM6 ) in the summary. Similarly, let INLINEFORM7 be the number of unique concepts in INLINEFORM8 . INLINEFORM9 , INLINEFORM10 indicate the appearance of concepts in the summary. Each concept INLINEFORM11 is assigned a weight of INLINEFORM12 , often measured by the number of sentences or documents that contain the concept. The ILP-based summarization approach BIBREF20 searches for an optimal assignment to the sentence and concept variables so that the selected summary sentences maximize coverage of important concepts. The relationship between concepts and sentences is captured by a co-occurrence matrix INLINEFORM13 , where INLINEFORM14 indicates the INLINEFORM15 -th concept appears in the INLINEFORM16 -th sentence, and INLINEFORM17 otherwise. In the literature, bigrams are frequently used as a surrogate for concepts BIBREF24 , BIBREF21 . We follow the convention and use `concept' and `bigram' interchangeably in this paper.
Two sets of linear constraints are specified to ensure the ILP validity: (1) a concept is selected if and only if at least one sentence carrying it has been selected (Eq. ), and (2) all concepts in a sentence will be selected if that sentence is selected (Eq. ). Finally, the selected summary sentences are allowed to contain a total of INLINEFORM0 words or less (Eq. ). DISPLAYFORM0
The above ILP can be transformed to matrix representation: DISPLAYFORM0
We use boldface letters to represent vectors and matrices. INLINEFORM0 is an auxiliary matrix created by horizontally stacking the concept vector INLINEFORM1 INLINEFORM2 times. Constraint set (Eq. ) specifies that a sentence is selected indicates that all concepts it carries have been selected. It corresponds to INLINEFORM3 constraints of the form INLINEFORM4 , where INLINEFORM5 .
As far as we know, this is the first-of-its-kind matrix representation of the ILP framework. It clearly shows the two important components of this framework, including 1) the concept-sentence co-occurrence matrix INLINEFORM0 , and 2) concept weight vector INLINEFORM1 . Existing work focus mainly on generating better estimates of concept weights ( INLINEFORM2 ), while we focus on improving the co-occurrence matrix INLINEFORM3 .
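To make the formulation concrete, the following is a minimal sketch of the concept-coverage ILP using the PuLP modeling library; the variable and matrix names (y for sentences, c for concepts, A for the sentence-concept co-occurrence matrix) are illustrative and are not meant to reproduce the paper's exact notation.

```python
import pulp

def ilp_summarize(A, weights, lengths, budget):
    """A[i][j] = 1 if concept i occurs in sentence j; returns indices of selected sentences."""
    n_concepts, n_sents = len(A), len(A[0])
    prob = pulp.LpProblem("concept_coverage", pulp.LpMaximize)
    y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_sents)]     # sentence selected
    c = [pulp.LpVariable(f"c{i}", cat="Binary") for i in range(n_concepts)]  # concept covered

    prob += pulp.lpSum(weights[i] * c[i] for i in range(n_concepts))         # maximize covered concept weight
    for i in range(n_concepts):
        # a concept is covered only if at least one sentence carrying it is selected
        prob += pulp.lpSum(A[i][j] * y[j] for j in range(n_sents)) >= c[i]
        for j in range(n_sents):
            if A[i][j]:
                prob += y[j] <= c[i]                 # selecting a sentence selects all its concepts
    prob += pulp.lpSum(lengths[j] * y[j] for j in range(n_sents)) <= budget  # summary length limit
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_sents) if y[j].value() == 1]
```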
Our Approach
Because of the lexical diversity problem, we suspect the co-occurrence matrix INLINEFORM0 may not establish a faithful correspondence between sentences and concepts. A concept may be conveyed using multiple bigram expressions; however, the current co-occurrence matrix only captures a binary relationship between sentences and bigrams. For example, we ought to give partial credit to “bicycle parts” given that a similar expression “bike elements” appears in the sentence. Domain-specific synonyms may be captured as well. For example, the sentence “I tried to follow along but I couldn't grasp the concepts” is expected to partially contain the concept “understand the”, although the latter did not appear in the sentence.
The existing matrix INLINEFORM0 is highly sparse. Only 3.7% of the entries are non-zero in the student response data sets on average (§ SECREF5 ). We therefore propose to impute the co-occurrence matrix by filling in missing values (i.e., matrix completion). This is accomplished by approximating the original co-occurrence matrix using a low-rank matrix. The low-rankness encourages similar concepts to be shared across sentences.
The ILP with a low-rank approximation of the co-occurrence matrix can be formalized as follows. DISPLAYFORM0
The low-rank approximation process makes two notable changes to the existing ILP framework.
Concretely, given the co-occurrence matrix INLINEFORM0 , we aim to find a low-rank matrix INLINEFORM1 whose values are close to INLINEFORM2 at the observed positions. Our objective function is DISPLAYFORM0
where INLINEFORM0 represents the set of observed value positions. INLINEFORM1 denotes the trace norm of INLINEFORM2 , i.e., INLINEFORM3 , where INLINEFORM4 is the rank of INLINEFORM5 and INLINEFORM6 are the singular values. By defining the following projection operator INLINEFORM7 , DISPLAYFORM0
our objective function (Eq. EQREF10 ) can be succinctly represented as DISPLAYFORM0
where INLINEFORM0 denotes the Frobenius norm.
Following Mazumder et al. Mazumder:2010, we optimize Eq. EQREF12 using the proximal gradient descent algorithm. The update rule is DISPLAYFORM0
where INLINEFORM0 is the step size at iteration k and the proximal function INLINEFORM1 is defined as the singular value soft-thresholding operator, INLINEFORM2 , where INLINEFORM3 is the singular value decomposition (SVD) of INLINEFORM4 and INLINEFORM5 .
Since the gradient of INLINEFORM0 is Lipschitz continuous with INLINEFORM1 ( INLINEFORM2 is the Lipschitz continuous constant), we follow Mazumder et al. Mazumder:2010 to choose fixed step size INLINEFORM3 , which has a provable convergence rate of INLINEFORM4 , where INLINEFORM5 is the number of iterations.
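A minimal NumPy sketch of this completion step is given below: a Soft-Impute-style proximal gradient loop with singular-value soft-thresholding. The regularization weight, iteration count, and the assumption that only the non-zero entries of the co-occurrence matrix are treated as observed are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    """Proximal operator of the trace norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_cooccurrence(A, observed, lam, n_iter=100, step=1.0):
    """A: sentence-concept co-occurrence matrix; observed: boolean mask of known entries."""
    B = A.astype(float).copy()
    for _ in range(n_iter):
        grad = np.where(observed, B - A, 0.0)        # gradient of 0.5 * ||P(B) - P(A)||_F^2
        B = svd_soft_threshold(B - step * grad, step * lam)
    return np.clip(B, 0.0, None)                     # truncate negative values

# Toy example with a small binary matrix; zero entries are treated as unobserved.
A = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [1, 0, 1, 0]], dtype=float)
B = complete_cooccurrence(A, observed=(A > 0), lam=0.5)
```

For realistic matrix sizes a truncated SVD would replace the full decomposition, but the update rule stays the same.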
Datasets
To demonstrate the generality of the proposed approach, we consider three distinct types of corpora, ranging from student response data sets from four different courses to three sets of reviews to one benchmark of news articles. The corpora are summarized in Table TABREF14 .
Student responses. Research has explored using reflection prompts/muddy cards/one-minute papers to promote and collect reflections from students BIBREF65 , BIBREF66 , BIBREF67 . However, it is expensive and time consuming for humans to summarize such feedback. It is therefore desirable to automatically summarize the student feedback produced in online and offline environments, although it is only recently that a data collection effort to support such research has been initiated BIBREF58 , BIBREF57 . In our data, one particular type of student response is considered, named “reflective feedback” BIBREF68 , which has been shown to enhance interaction between instructors and students by educational researchers BIBREF69 , BIBREF70 . More specifically, students are presented with the following prompts after each lecture and asked to provide responses: 1) “describe what you found most interesting in today's class,” 2) “describe what was confusing or needed more detail,” and 3) “describe what you learned about how you learn.” These open-ended prompts are carefully designed to encourage students to self-reflect, allowing them to “recapture experience, think about it and evaluate it" BIBREF68 .
To test generality, we gathered student responses from four different courses, as shown in Table TABREF14 . The first one was collected by Menekse et al. Menekse:2011 using paper-based surveys from an introductory materials science and engineering class (henceforth Eng) taught in a major U.S. university, and a subset is made public by us BIBREF11 , available at the link: http://www.coursemirror.com/download/dataset. The remaining three courses are collected by us using a mobile application, CourseMIRROR BIBREF57 , BIBREF58 and then the reference summaries for each course are created by human annotators with the proper background. The human annotators are allowed to create abstract summaries using their own words in addition to selecting phrases directly from the responses. While the 2nd and 3rd data sets are from the same course, Statistics for Industrial Engineers, they were taught in 2015 and 2016 respectively (henceforth Stat2015 and Stat2016), at the Boǧaziçi University in Turkey. The course was taught in English while the official language is Turkish. The last one is from a fundamental undergraduate Computer Science course (data structures) at a local U.S. university taught in 2016 (henceforth CS2016).
Another reason we choose the student responses is that we have advanced annotation allowing us to perform an intrinsic evaluation to test whether the low-rank approximation does capture similar concepts or not. An example of the annotation is shown in Table TABREF15 , where phrases in the student responses that are semantically the same as the summary phrases are highlighted with the same color by human annotators. For example, “error bounding" (S2), “error boundary" (S4), “finding that error" (S3), and “determining the critical value for error" (S7) are semantically equivalent to “Error bounding" in the human summary. Details of the intrinsic evaluation are introduced in SECREF20 .
Product and peer reviews. The review data sets are provided by Xiong and Litman xiong-litman:2014:Coling and consist of 3 categories. The first one is a subset of product reviews from a widely used data set in review opinion mining and sentiment analysis, contributed by Jindal and Liu jindal2008opinion. In particular, three sets of reviews were randomly sampled for a representative product (digital camera), each with 18 reviews of an individual product type (e.g. “summarizing 18 camera reviews for Nikon D3200"). The second one consists of movie reviews crawled from IMDB.com by the authors themselves. The third one consists of peer reviews collected in a college-level history class from an online peer-review reciprocal system, SWoRD BIBREF71 . The average number of sentences per review set is 85 for camera reviews, 328 for movie reviews and 80 for peer reviews; the average number of words per sentence in the camera, movie, and peer reviews is 23, 24 and 19, respectively. The human summaries were collected in the form of online surveys (one survey per domain) hosted by Qualtrics. Each human summary contains 10 sentences from users' reviews. Example movie reviews are shown in Table TABREF17 .
News articles. Most summarization work focuses on news documents, as driven by the Document Understanding Conferences (DUC) and Text Analysis Conferences (TAC). For comparison, we select DUC 2004 to evaluate our approach (henceforth DUC04), which is widely used in the literature BIBREF72 , BIBREF73 , BIBREF74 , BIBREF75 , BIBREF76 . It consists of 50 clusters of Text REtrieval Conference (TREC) documents, from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contained on average 10 documents. The task is to create a short summary ( INLINEFORM0 665 bytes) of each cluster. Example news sentences are shown in Table TABREF19 .
Experiments
In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation.
Intrinsic evaluation
When examining the imputed sentence-concept co-occurrence matrix, we notice some interesting examples that indicate the effectiveness of the proposed approach, shown in Table TABREF21 .
We want to investigate whether the matrix completion (MC) helps to capture similar concepts (i.e., bigrams). Recall that, if a bigram INLINEFORM0 is similar to another bigram in a sentence INLINEFORM1 , the sentence INLINEFORM2 should assign a partial score to the bigram INLINEFORM3 after the low-rank approximation. For instance, “The activity with the bicycle parts" should give a partial score to “bike elements" since it is similar to “bicycle parts". Note that, the co-occurrence matrix INLINEFORM4 measures whether a sentence includes a bigram or not. Without matrix completion, if a bigram INLINEFORM5 does not appear in a sentence INLINEFORM6 , INLINEFORM7 . After matrix completion, INLINEFORM8 ( INLINEFORM9 is the low-rank approximation matrix of INLINEFORM10 ) becomes a continuous number ranging from 0 to 1 (negative values are truncated). Therefore, INLINEFORM11 does not necessarily mean the sentence contains a similar bigram, since it might also give positive scores to non-similar bigrams. To solve this issue, we propose two different ways to test whether the matrix completion really helps to capture similar concepts.
H1.a: A bigram receives a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. That is, if a bigram INLINEFORM0 is similar to one of bigrams in a sentence INLINEFORM1 , but not similar to any bigram in another sentence INLINEFORM2 , then after matrix completion, INLINEFORM3 .
H1.b: A sentence gives higher partial scores to bigrams that are similar to its own bigrams than bigrams that are different from its own. That is, if a sentence INLINEFORM0 has a bigram that is similar to INLINEFORM1 , but none of its bigrams is similar to INLINEFORM2 , then, after matrix completion, INLINEFORM3 .
In order to test these two hypotheses, we need to construct gold-standard pairs of similar bigrams and pairs of different bigrams, which can be automatically obtained with the phrase-highlighting data (Table TABREF15 ). We first extract a candidate bigram from a phrase if and only if a single bigram can be extracted from the phrase. In this way, we discard long phrases if there are multiple candidate bigrams among them in order to avoid ambiguity as we cannot validate which of them match another target bigram. A bigram is defined as two words and at least one of them is not a stop word. We then extract every pair of candidate bigrams that are highlighted in the same color as similar bigrams. Similarly, we extract every pair of candidate bigrams that are highlighted as different colors as different bigrams. For example, “bias reduction" is a candidate phrase, which is similar to “bias correction" since they are in the same color.
To test H1.a, given a bigram INLINEFORM0 , a bigram INLINEFORM1 that is similar to it, and a bigram INLINEFORM2 that is different from it, we can select the bigram INLINEFORM3 , and the sentence INLINEFORM4 that contains INLINEFORM5 , and the sentence INLINEFORM6 that contains INLINEFORM7 . We ignore INLINEFORM8 if it contains any other bigram that is similar to INLINEFORM9 to eliminate the compounded case that both similar and different bigrams are within one sentence. Note, if there are multiple sentences containing INLINEFORM10 , we consider each of them. In this way, we construct a triple INLINEFORM11 , and test whether INLINEFORM12 . To test H1.b, for each pair of similar bigrams INLINEFORM13 , and different bigrams INLINEFORM14 , we select the sentence INLINEFORM15 that contains INLINEFORM16 so that we construct a triple INLINEFORM17 , and test whether INLINEFORM18 . We also filtered out INLINEFORM19 that contains similar bigram(s) to INLINEFORM20 to remove the compounded effect. In this way, we collected a gold-standard data set to test the two hypotheses above as shown in Table TABREF24 .
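The H1.a comparison can be sketched as follows, assuming triples is the gold-standard list of (bigram, sentence with a similar bigram, sentence with only different bigrams) indices and T_hat is the completed matrix; the significance test is not named in the text, so the paired Wilcoxon signed-rank test here is only one reasonable choice, not necessarily the one used.

from scipy.stats import wilcoxon

def test_h1a(T_hat, triples):
    # Partial score of bigram b in the sentence containing a similar bigram (s_plus)
    # versus the sentence containing only different bigrams (s_minus).
    with_similar = [T_hat[b, s_plus] for b, s_plus, s_minus in triples]
    without_similar = [T_hat[b, s_minus] for b, s_plus, s_minus in triples]
    stat, p_value = wilcoxon(with_similar, without_similar, alternative="greater")
    return sum(with_similar) / len(triples), sum(without_similar) / len(triples), p_value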
The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than in a sentence that does not. Therefore, H1.a holds. For H1.b, we only observe INLINEFORM1 significantly on Stat2016, and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. There are several possible explanations. First, the gold-standard data set is still small in the sense that only a limited portion of the bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated in different colors are unrelated is too strong. For example, “hypothesis testing" and “H1 and Ho conditions" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occur in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts" in Table TABREF25 is associated with “hard to".
Extrinsic evaluation
Our proposed approach is compared against a range of baselines. They are 1) MEAD BIBREF30 , a centroid-based summarization system that scores sentences based on length, centroid, and position; 2) LexRank BIBREF29 , a graph-based summarization approach based on eigenvector centrality; 3) SumBasic BIBREF77 , an approach that assumes words occurring frequently in a document cluster have a higher chance of being included in the summary; 4) Pointer-Generator Networks (PGN) BIBREF16 , a state-of-the-art neural encoder-decoder approach for abstractive summarization. The system was trained on the CNN/Daily Mail data sets BIBREF78 , BIBREF14 . 5) ILP BIBREF21 , a baseline ILP framework without matrix completion.
The Pointer-Generator Network BIBREF16 is a neural encoder-decoder architecture. It encourages the system to copy words from the source text via pointing, while retaining the ability to produce novel words through the generator. It also contains a coverage mechanism that keeps track of what has been summarized, thus reducing word repetition. Pointer-generator networks have not previously been tested for summarizing content contributed by multiple authors; in this study we evaluate their performance on our collection of datasets.
For the ILP-based approaches, we use bigrams as concepts (bigrams consisting of only stopwords are removed) and term frequency as concept weights. We leverage the co-occurrence statistics both within and across the entire corpus. We also filtered out bigrams that appear only once in each corpus, yielding better ROUGE scores with lower computational cost. The results without using this low-frequency filtering are shown in the Appendix for comparison. In Table TABREF26 , we present summarization results evaluated by ROUGE BIBREF72 and human judges.
To compare with the official participants in DUC 2004 BIBREF79 , we selected the top-5 systems submitted in the competition (ranked by R-1), together with the 8 human annotators. The results are presented in Table TABREF27 .
ROUGE. It is a recall-oriented metric that compares system and reference summaries based on n-gram overlaps and is widely used in summarization evaluation. In this work, we report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-SU4 (R-SU4), and ROUGE-L (R-L) scores, which respectively measure the overlap of unigrams, bigrams, skip-bigrams (with a maximum gap length of 4), and the longest common subsequence. First, there is no winner across all data sets. MEAD is the best on camera; SumBasic is best on Stat2016 and mostly best on Stat2015; ILP is best on DUC04. The ILP baseline is comparable to the best participant (Table TABREF27 ) and even has the best R-2. PGN performs worst, which is not surprising since it was trained on a different data set and may not generalize to our data sets. Our method ILP+MC is best on peer review and mostly best on Eng and CS2016. Second, compared with ILP, our method works better on Eng, CS2016, movie, and peer.
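For reference, the snippet below computes a subset of these metrics with the rouge-score package; the reported numbers are presumably computed with the standard ROUGE toolkit, and this package does not implement R-SU4, so the snippet only approximates the evaluation setup.

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "error bounding and bias correction were the most interesting topics"
system = "students found error bounding and bias correction most interesting"
scores = scorer.score(reference, system)  # score(target, prediction)
for name, s in scores.items():
    print(name, round(s.recall, 3), round(s.fmeasure, 3))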
These results show that our proposed method is not always better than the ILP framework, and no single summarization system wins on all data sets. This is perhaps not surprising. The no free lunch theorem for machine learning BIBREF80 states that, averaged over all possible data-generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally better than any other BIBREF81 .
Human Evaluation. Because ROUGE cannot thoroughly capture the semantic similarity between system and reference summaries, we further perform a human evaluation. For each task, we present a pair of system outputs in a random order, together with one human summary to five Amazon turkers. If there are multiple human summaries, we will present each human summary and the pair of system outputs to turkers. For student responses, we also present the prompt. An example Human Intelligence Task (HIT) is illustrated in Fig. FIGREF32 .
The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of the same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summary for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate with an example, for Stat2015, there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT is done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited, and each turker completed 27.3 HITs on average. The distribution of the human preference scores is shown in Fig. FIGREF34 .
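The counts quoted above can be re-derived as a quick sanity check:

# (number of tasks, human summaries per task) for each corpus, as listed above.
tasks_and_refs = [(36, 1), (44, 2), (48, 2), (46, 2),  # Eng, Stat2015, Stat2016, CS2016
                  (3, 8), (3, 5), (3, 2), (50, 4)]     # camera, movie, peer, DUC04
system_pairs, turkers_per_hit = 3, 5
hits = system_pairs * sum(t * r for t, r in tasks_and_refs)
print(hits, hits * turkers_per_hit)                 # 1671 HITs, 8355 comparisons
print(44 * 2 * system_pairs)                        # 264 HITs for Stat2015
print(44 * 2 * system_pairs * turkers_per_hit)      # 1,320 comparisons for Stat2015
print(round(hits * turkers_per_hit / 306, 1))       # roughly 27.3 assignments per turker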
We calculate the percentage of “wins” (strong or slight preference) for each system among all comparisons with its counterparts. Results are reported in the last column of Table TABREF26 . ILP+MC is preferred significantly more often than ILP on Stat2015, CS2016, and DUC04. There is no significant difference between ILP+MC and SumBasic on the student response data sets. Interestingly, a system with better ROUGE scores is not necessarily preferred more by humans. For example, ILP is preferred more on all three review data sets. Regarding the inter-annotator agreement, we find that 48.5% of the individual judgements agree with the majority votes. The agreement scores decomposed by data sets and system pairs are shown in Table TABREF35 . Overall, the agreement scores are low, only slightly above the 45.7% agreement that would be achieved by random clicking. There are several possible explanations. The first is that many turkers did click randomly (39 out of 160 failed our quality checkpoints); unfortunately, we did not check all the turkers as we inserted the checkpoints randomly. The second is that comparing two system summaries is difficult for humans, which leads to low agreement. Xiong and Litman xiong-litman:2014:Coling also found that it is hard to make humans agree on the choice of summary sentences. A third possibility is that turkers needed to see the raw input sentences, which are not shown in a HIT.
An interesting observation is that our approach produces summaries with more sentences, as shown in Table TABREF39 . The number of words in the summaries is approximately the same for all methods on a particular corpus, as constrained by Eq. . For the camera, movie and peer reviews, the number of sentences in the human summary is 10, and SumBasic and ILP+MC produce more sentences than ILP. It is hard for people to judge which system summary is closer to a human summary when the summaries are long (216, 242, and 190 words for camera, movie, and peer reviews respectively). For inter-annotator agreement, 50.3% of judgements agree with the majority votes for the student response data sets, 47.6% for reviews, and only 46.3% for news documents. We hypothesize that for these long summaries, people may prefer short system summaries, and for short summaries, people may prefer long system summaries. We leave the examination of this finding to future work.
Table TABREF40 presents example system outputs. This offers an intuitive understanding of our proposed approach.
Analysis of Influential Factors
In this section, we investigate the impact of the low-rank approximation process on the ILP framework. Therefore, in the following experiments, we focus on the direct comparison between ILP and ILP+MC and leave the comparison with other baselines as future work. The proposed method achieved better summarization performance than the ILP baseline on Eng, CS2016, movie, and peer. Unfortunately, it does not work as expected on two of the student response courses (Stat2015 and Stat2016), the camera reviews, and the news documents. This leaves open the research question of when and why the proposed method works better. To investigate which key factors impact the performance, we perform additional experiments using synthesized data sets.
A variety of attributes that might impact the performance are summarized in Table TABREF41 , categorized into two types. The input attributes are extracted from the original input documents, and the summary attributes are extracted from the human summaries together with the input documents. Below are some important attributes that we expect to have a large impact on the performance.
The attributes extracted from the corpora are shown in Table TABREF42 . Note that a bigram that appears more often in the original documents has a better chance of being included in the human summaries, as indicated by INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 . This supports our choice to remove low-frequency bigrams.
According to the ROUGE scores, our method works better on Eng, CS2016, movie, and peer (Table TABREF26 ). If we group each attribute into two groups, corresponding to whether ILP+MC works better, we do not find significant differences among these attributes. To further understand which factors impact the performance and have more predictive power, we train a binary classification decision tree by treating the 4 working corpora as positive examples and the remaining 4 as negative examples.
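A sketch of this analysis with scikit-learn is shown below; the attribute values are placeholders for illustration only (only the Stat2015 and camera ratios echo numbers mentioned later in this section), so the printed tree merely illustrates the procedure, not the actual model.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder values for a single attribute, the ratio discussed below; the real
# attribute table is TABREF42. Corpus order: Eng, CS2016, movie, peer, Stat2015,
# Stat2016, camera, DUC04.
X = np.array([[30.0], [28.0], [35.0], [33.0], [11.9], [15.0], [84.9], [20.0]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = ILP+MC beats ILP on ROUGE
tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)  # depth 1 mirrors the single decision point
print(export_text(tree, feature_names=["ratio_once_in_input"]))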
According to the decision tree model, there is only one decision point in the tree: INLINEFORM0 , the ratio of bigrams in human summaries that appear in the input only once. Generally, our proposed method works if INLINEFORM1 , except for camera. When INLINEFORM2 is low, it means that annotators either adopt concepts that appear multiple times or just use their own. In this case, the frequency-based weighting (i.e., INLINEFORM3 in Eq. EQREF5 ) can capture the concepts that appear multiple times. On the other hand, when INLINEFORM4 is high, it means that a large number of bigrams appear only once in the input documents. In this case, annotators have difficulty selecting a representative one due to the ambiguous choice. Therefore, we hypothesize (H2) that ILP+MC is more likely to outperform ILP when this ratio is high.
To test the predictive power of this attribute, we want to test it on new data sets. Unfortunately, creating new data sets with gold-standard human summaries is expensive and time-consuming, and the new data set may not have the desired property within a certain range of INLINEFORM0 . Therefore, we propose to manipulate the ratio and create new data sets using the existing data sets without additional human annotation. INLINEFORM1 can be represented as follows, DISPLAYFORM0
where INLINEFORM0 INLINEFORM1
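One plausible reading of this ratio, under the assumption that it is the fraction of distinct human-summary bigrams that occur exactly once in the input documents, is sketched below; this is a reconstruction for illustration, not necessarily the exact definition used for Table TABREF42 .

from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

def ratio_once_in_input(input_sentences, summary_sentences):
    input_counts = Counter(bg for s in input_sentences for bg in bigrams(s))
    summary_bgs = {bg for s in summary_sentences for bg in bigrams(s)}
    once = sum(1 for bg in summary_bgs if input_counts[bg] == 1)
    return once / len(summary_bgs) if summary_bgs else 0.0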
There are two different ways to control the ratio, both involving removing input sentences with certain constraints.
In this way, we obtained different levels of INLINEFORM0 by deleting sentences. The ROUGE scores on the synthesized corpus are shown in Table TABREF52 .
Our hypothesis H2 is partially valid. When the ratio increases, ILP+MC gains a relative advantage over ILP. For example, for Stat2015, ILP+MC is no longer significantly worse than ILP when the ratio increases from 11.9 to 18.1. For camera, ILP+MC becomes better than ILP when the ratio increases from 84.9 to 85.8. For Stat2016, CS2016, and Eng, more improvements or significant improvements can be found for ILP+MC compared to ILP when the ratio increases. However, for movie and peer review, ILP+MC is worse than ILP when the ratio increases.
We have investigated a number of attributes that might impact the performance of our proposed method. Unfortunately, we do not have a conclusive answer as to when our method works better. However, we would like to share some thoughts about it.
First, our proposed method works better on two of the student response courses (Eng and CS2016), but not on the other two (Stat2015 and Stat2016). An important factor we ignored is that the students from the other two courses are not native English speakers, resulting in significantly shorter responses (4.3 INLINEFORM0 6.0 INLINEFORM1 8.8, 9.1, INLINEFORM2 , Table TABREF42 , the row with id=11). With shorter sentences, there is less context for the low-rank approximation to leverage.
Second, our proposed method works better on movie and peer reviews, but not camera reviews. As pointed out by Xiong xiong2015helpfulness, both movie reviews and peer reviews are potentially more complicated than the camera reviews, as the review content consists of both the reviewer's evaluations of the subject (e.g., a movie or paper) and the reviewer's references of the subject, where the subject itself is full of content (e.g., movie plot, papers). In contrast, such references in product reviews are usually the mentions of product components or properties, which have limited variations. This characteristic makes review summarization more challenging in these two domains.
Conclusion
We made the first effort to summarize student feedback using an Integer Linear Programming framework with a low-rank matrix approximation, and applied it to different types of data sets including news articles, product reviews, and peer reviews. Our approach allows sentences to share co-occurrence statistics and alleviates the sparsity issue. Our experiments showed that the proposed approach outperforms a range of baselines in ROUGE scores on the student response corpora Eng and CS2016, but not on the other courses.
ROUGE is often adopted in research papers to evaluate the quality of summarization because it is fast and correlates well with human evaluation BIBREF72 , BIBREF82 . However, ROUGE has also been criticized for not thoroughly capturing the semantic similarity between system and reference summaries. Different alternatives have been proposed to enhance ROUGE. For example, Graham rankel2016statistical proposed to use content-oriented features in conjunction with linguistic features. Similarly, Cohan and Goharian COHAN16.1144 proposed to use content relevance. At the same time, many researchers supplement ROUGE with a manual evaluation. This is why we conduct evaluations using both ROUGE and human evaluation in this work.
However, we found that a system with better ROUGE scores is not necessarily preferred more by humans (§ SECREF28 ). For example, ILP is preferred more on all three review data sets even though it obtained lower ROUGE scores than the other systems. This coincides with the fact that ILP generated summaries with fewer sentences than the other two systems (Table TABREF39 ).
We also investigated a variety of attributes that might impact the performance on a range of data sets. Unfortunately, we do not yet have a conclusive answer as to when our method works better.
In the future, we would like to conduct a large-scale intrinsic evaluation to examine whether the low-rank matrix approximation captures similar bigrams or not, and to investigate more attributes, such as new metrics for diversity. We would also like to explore the opportunities of combining a vector sentence representation learned by a neural network with the ILP framework. | low-rank approximation of the co-occurrence matrix |
e84ba95c9a188fda4563f45e53fbc8728d8b5dab | e84ba95c9a188fda4563f45e53fbc8728d8b5dab_0 | Q: Do they build one model per topic or on all topics?
Text: Introduction
Summarization is a promising technique for reducing information overload. It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and concatenating them to form a summary, whereas abstractive methods can involve a number of high-level text operations such as word reordering, paraphrasing, and generalization BIBREF1 . To date, summarization has been successfully exploited for a number of text domains, including news articles BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , product reviews BIBREF6 , online forum threads BIBREF7 , meeting transcripts BIBREF8 , scientific articles BIBREF9 , BIBREF10 , student course responses BIBREF11 , BIBREF12 , and many others.
Summarizing content contributed by multiple authors is particularly challenging. This is partly because people tend to use different expressions to convey the same semantic meaning. In a recent study of summarizing student responses to post-class reflective questions, Luo et al., Luo:2016:NAACL observe that the students use distinct lexical items such as “bike elements” and “bicycle parts” to refer to the same concept. The student responses frequently contain expressions with little or no word overlap, such as “the main topics of this course” and “what we will learn in this class,” when they are prompted with “describe what you found most interesting in today's class.” A similar phenomenon has also been observed in the news domain, where reporters use different nicknames, e.g., “Bronx Zoo” and “New York Highlanders,” to refer to the baseball team “New York Yankees.” Luo et al., Luo:2016:NAACL report that about 80% of the document bigrams occur only once or twice for the news domain, whereas the ratio is 97% for student responses, suggesting the latter domain has a higher level of lexical diversity. When source documents contain diverse expressions conveying the same meaning, it can hinder the summarization system's ability to effectively identify salient content from the source documents. It can also increase the summary redundancy if lexically-distinct but semantically-similar expressions are included in the summary.
Existing neural encoder-decoder models may not work well at summarizing such content with high lexical variety BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . On one hand, training the neural sequence-to-sequence models requires a large amount of parallel data. The cost of annotating gold-standard summaries for many domains such as student responses can be prohibitive. Without sufficient labelled data, the models can only be trained on automatically gathered corpora, where an instance often includes a news article paired with its title or a few highlights. On the other hand, the summaries produced by existing neural encoder-decoder models are far from perfect. The summaries are mostly extractive with minor edits BIBREF16 , contain repetitive words and phrases BIBREF17 and may not accurately reproduce factual details BIBREF18 , BIBREF19 . We examine the performance of a state-of-the-art neural summarization model in Section § SECREF28 .
In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts" will be allowed to partially contain “bike elements" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings.
The research contributions of this work include the following.
In the following sections we first present a thorough review of the related work (§ SECREF2 ), then introduce our ILP summarization framework (§ SECREF3 ) with a low-rank approximation of the co-occurrence matrix optimized using the proximal gradient method (§ SECREF4 ). Experiments are performed on a collection of eight datasets (§ SECREF5 ) containing student responses to post-class reflective questions, product reviews, peer reviews, and news articles. Intrinsic evaluation (§ SECREF20 ) shows that the low-rank approximation algorithm can effectively group distinct expressions used in similar semantic context. For extrinsic evaluation (§ SECREF28 ) our proposed framework obtains competitive results in comparison to state-of-the-art summarization systems. Finally, we conduct comprehensive studies analyzing the characteristics of the datasets and suggest critical factors that affect the summarization performance (§ SECREF7 ).
Related Work
Extractive summarization has undergone great development over the past decades. It focuses on extracting relevant sentences from a single document or a cluster of documents related to a particular topic. Various techniques have been explored, including maximal marginal relevance BIBREF22 , submodularity BIBREF23 , integer linear programming BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF4 , minimizing reconstruction error BIBREF28 , graph-based models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , determinantal point processes BIBREF33 , neural networks and reinforcement learning BIBREF34 , BIBREF35 among others. Nonetheless, most studies are bound to a single dataset and few approaches have been evaluated in a cross-domain setting. In this paper, we propose an enhanced ILP framework and evaluate it on a broad range of datasets. We present an in-depth analysis of the dataset characteristics derived from both source documents and reference summaries to understand how domain-specific factors may affect the applicability of the proposed approach.
Neural summarization has seen promising improvements in recent years with encoder-decoder models BIBREF13 , BIBREF14 . The encoder condenses the source text to a dense vector, whereas the decoder unrolls the vector to a summary sequence by predicting one word at a time. A number of studies have been proposed to deal with out-of-vocabulary words BIBREF16 , improve the attention mechanism BIBREF36 , BIBREF37 , BIBREF38 , avoid generating repetitive words BIBREF16 , BIBREF17 , adjust summary length BIBREF39 , encode long text BIBREF40 , BIBREF41 and improve the training objective BIBREF42 , BIBREF15 , BIBREF43 . To date, these studies focus primarily on single-document summarization and headline generation. This is partly because training neural encoder-decoder models requires a large amount of parallel data, yet the cost of annotating gold-standard summaries for most domains can be prohibitive. We validate the effectiveness of a state-of-the-art neural summarization system BIBREF16 on our collection of datasets and report results in § SECREF28 .
In this paper we focus on the integer linear programming-based summarization framework and propose enhancements to it to summarize text content with high lexical diversity. The ILP framework is shown to perform strongly on extractive summarization BIBREF20 , BIBREF44 , BIBREF21 . It produces an optimal selection of sentences that (i) maximize the coverage of important concepts discussed in the source, (ii) minimize the redundancy in pairs of selected sentences, and (iii) ensure the summary length does not exceed a limit. Previous work has largely focused on improving the estimation of concept weights in the ILP framework BIBREF45 , BIBREF46 , BIBREF47 , BIBREF48 , BIBREF4 . However, distinct lexical items such as “bike elements” and “bicycle parts” are treated as different concepts and their weights are not shared. In this paper we overcome this issue by proposing a low-rank approximation to the sentence-concept co-occurrence matrix to intrinsically group lexically-distinct but semantically-similar expressions; they are considered as a whole when maximizing concept coverage and minimizing redundancy.
Our work is also different from the traditional approaches using dimensionality reduction techniques such as non-negative matrix factorization (NNMF) and latent semantic analysis (LSA) for summarization BIBREF49 , BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 . In particular, Wang et al. wang2008multi use NNMF to group sentences into clusters; Conroy et al. conroy-EtAl:2013:MultiLing explore NNMF and LSA to obtain better estimates of term weights; Wang et al. wang2016low use low-rank approximation to cast sentences and images to the same embedding space. Different from the above methods, our proposed framework focuses on obtaining a low-rank approximation of the co-occurrence matrix embedded in the ILP framework, so that diverse expressions can share co-occurrence frequencies. Note that out-of-vocabulary expressions and domain-specific terminologies are abundant in our datasets, therefore simply calculating the lexical overlap BIBREF54 or cosine similarity of word embeddings BIBREF55 cannot serve our goal well.
This manuscript extends our previous work on summarizing student course responses BIBREF11 , BIBREF56 , BIBREF12 submitted after each lecture via a mobile app named CourseMIRROR BIBREF57 , BIBREF58 , BIBREF59 . The students are asked to respond to reflective prompts such as “describe what you found most interesting in today's class” and “describe what was confusing or needed more detail.” For large classes with hundreds of students, it can be quite difficult for instructors to manually analyze the student responses, hence the help of automatic summarization. Our extensions of this work are along three dimensions: (i) we crack the “black-box” of the low-rank approximation algorithm to understand if it indeed allows lexically-diverse but semantically-similar items to share co-occurrence statistics; (ii) we compare the ILP-based summarization framework with state-of-the-art baselines, including a popular neural encoder-decoder model for summarization; (iii) we expand the student feedback datasets to include responses collected from materials science and engineering, statistics for industrial engineers, and data structures. We additionally experiment with reviews and news articles. Analyzing the unique characteristics of each dataset allows us to identify crucial factors influencing the summarization performance.
With the fast development of Massive Open Online Courses (MOOC) platforms, more attention is being dedicated to analyzing educationally-oriented language data. These studies seek to identify student leaders from MOOC discussion forums BIBREF60 , perform sentiment analysis on student discussions BIBREF61 , improve student engagement and reduce student retention BIBREF62 , BIBREF63 , and use language generation techniques to automatically generate feedback to students BIBREF64 . The focus of this paper is to automatically summarize student responses so that instructors can collect feedback in a timely manner. We expect the developed summarization techniques and result analysis to further summarization research in similar text genres exhibiting high lexical variety.
ILP Formulation
Let INLINEFORM0 be a set of documents that consist of INLINEFORM1 sentences in total. Let INLINEFORM2 , INLINEFORM3 indicate if a sentence INLINEFORM4 is selected ( INLINEFORM5 ) or not ( INLINEFORM6 ) in the summary. Similarly, let INLINEFORM7 be the number of unique concepts in INLINEFORM8 . INLINEFORM9 , INLINEFORM10 indicate the appearance of concepts in the summary. Each concept INLINEFORM11 is assigned a weight of INLINEFORM12 , often measured by the number of sentences or documents that contain the concept. The ILP-based summarization approach BIBREF20 searches for an optimal assignment to the sentence and concept variables so that the selected summary sentences maximize coverage of important concepts. The relationship between concepts and sentences is captured by a co-occurrence matrix INLINEFORM13 , where INLINEFORM14 indicates the INLINEFORM15 -th concept appears in the INLINEFORM16 -th sentence, and INLINEFORM17 otherwise. In the literature, bigrams are frequently used as a surrogate for concepts BIBREF24 , BIBREF21 . We follow the convention and use `concept' and `bigram' interchangeably in this paper.
Two sets of linear constraints are specified to ensure the ILP validity: (1) a concept is selected if and only if at least one sentence carrying it has been selected (Eq. ), and (2) all concepts in a sentence will be selected if that sentence is selected (Eq. ). Finally, the selected summary sentences are allowed to contain a total of INLINEFORM0 words or less (Eq. ). DISPLAYFORM0
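The sketch below instantiates this ILP with the PuLP modeling library, using bigram concepts weighted by the number of sentences containing them; it is a simplified illustration of the formulation above rather than the exact system used in the experiments.

import pulp
from collections import Counter

def bigrams(sentence):
    toks = sentence.lower().split()
    return set(zip(toks, toks[1:]))

def ilp_summarize(sentences, budget=100):
    weights = Counter(bg for s in sentences for bg in bigrams(s))  # sentence frequency as concept weight
    concepts = list(weights)
    occ = [[int(bg in bigrams(s)) for s in sentences] for bg in concepts]  # co-occurrence matrix A

    prob = pulp.LpProblem("concept_ilp", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(len(sentences))]
    c = [pulp.LpVariable(f"c_{i}", cat="Binary") for i in range(len(concepts))]

    prob += pulp.lpSum(weights[concepts[i]] * c[i] for i in range(len(concepts)))  # objective
    for i in range(len(concepts)):
        # A concept is selected only if at least one sentence carrying it is selected.
        prob += c[i] <= pulp.lpSum(occ[i][j] * x[j] for j in range(len(sentences)))
        for j in range(len(sentences)):
            if occ[i][j]:
                prob += x[j] <= c[i]  # a selected sentence brings all of its concepts
    prob += pulp.lpSum(len(s.split()) * x[j] for j, s in enumerate(sentences)) <= budget  # length limit

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [s for j, s in enumerate(sentences) if x[j].value() == 1]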
The above ILP can be transformed to matrix representation: DISPLAYFORM0
We use boldface letters to represent vectors and matrices. INLINEFORM0 is an auxiliary matrix created by horizontally stacking the concept vector INLINEFORM1 INLINEFORM2 times. Constraint set (Eq. ) specifies that a sentence is selected indicates that all concepts it carries have been selected. It corresponds to INLINEFORM3 constraints of the form INLINEFORM4 , where INLINEFORM5 .
To the best of our knowledge, this is the first matrix representation of the ILP framework. It clearly shows the two important components of this framework: 1) the concept-sentence co-occurrence matrix INLINEFORM0 , and 2) the concept weight vector INLINEFORM1 . Existing work focuses mainly on generating better estimates of the concept weights ( INLINEFORM2 ), while we focus on improving the co-occurrence matrix INLINEFORM3 .
Our Approach
Because of the lexical diversity problem, we suspect the co-occurrence matrix INLINEFORM0 may not establish a faithful correspondence between sentences and concepts. A concept may be conveyed using multiple bigram expressions; however, the current co-occurrence matrix only captures a binary relationship between sentences and bigrams. For example, we ought to give partial credit to “bicycle parts” given that a similar expression “bike elements” appears in the sentence. Domain-specific synonyms may be captured as well. For example, the sentence “I tried to follow along but I couldn't grasp the concepts” is expected to partially contain the concept “understand the”, although the latter did not appear in the sentence.
The existing matrix INLINEFORM0 is highly sparse. Only 3.7% of the entries are non-zero in the student response data sets on average (§ SECREF5 ). We therefore propose to impute the co-occurrence matrix by filling in missing values (i.e., matrix completion). This is accomplished by approximating the original co-occurrence matrix using a low-rank matrix. The low-rankness encourages similar concepts to be shared across sentences.
The ILP with a low-rank approximation of the co-occurrence matrix can be formalized as follows. DISPLAYFORM0
The low-rank approximation process makes two notable changes to the existing ILP framework.
Concretely, given the co-occurrence matrix INLINEFORM0 , we aim to find a low-rank matrix INLINEFORM1 whose values are close to INLINEFORM2 at the observed positions. Our objective function is DISPLAYFORM0
where INLINEFORM0 represents the set of observed value positions. INLINEFORM1 denotes the trace norm of INLINEFORM2 , i.e., INLINEFORM3 , where INLINEFORM4 is the rank of INLINEFORM5 and INLINEFORM6 are the singular values. By defining the following projection operator INLINEFORM7 , DISPLAYFORM0
our objective function (Eq. EQREF10 ) can be succinctly represented as DISPLAYFORM0
where INLINEFORM0 denotes the Frobenius norm.
Following Mazumder et al. Mazumder:2010, we optimize Eq. EQREF12 using the proximal gradient descent algorithm. The update rule is DISPLAYFORM0
where INLINEFORM0 is the step size at iteration k and the proximal function INLINEFORM1 is defined as the singular value soft-thresholding operator, INLINEFORM2 , where INLINEFORM3 is the singular value decomposition (SVD) of INLINEFORM4 and INLINEFORM5 .
Since the gradient of INLINEFORM0 is Lipschitz continuous with INLINEFORM1 ( INLINEFORM2 is the Lipschitz constant), we follow Mazumder et al. Mazumder:2010 and choose the fixed step size INLINEFORM3 , which has a provable convergence rate of INLINEFORM4 , where INLINEFORM5 is the number of iterations.
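The optimization just described can be sketched in a few lines of NumPy, following the update rule and the singular value soft-thresholding operator above; this is a simplified sketch rather than the exact implementation.

import numpy as np

def svd_soft_threshold(M, tau):
    # U diag(max(sigma - tau, 0)) V^T
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete_matrix(A, observed, lam=1.0, n_iters=200):
    # Proximal gradient descent on 0.5 * ||P_Omega(A - B)||_F^2 + lam * ||B||_*,
    # with the fixed step size 1 since the gradient of the smooth term is 1-Lipschitz.
    B = np.zeros_like(A, dtype=float)
    for _ in range(n_iters):
        grad = observed * (B - A)
        B = svd_soft_threshold(B - grad, lam)
    return np.clip(B, 0.0, None)  # negative values are truncated

# Toy concept-by-sentence co-occurrence matrix; here every entry is treated as
# observed, although Omega could also be restricted to a subset of positions.
A = np.array([[1., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.]])
print(np.round(complete_matrix(A, np.ones_like(A), lam=0.5), 2))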
Datasets
To demonstrate the generality of the proposed approach, we consider three distinct types of corpora, ranging from student response data sets from four different courses to three sets of reviews to one benchmark of news articles. The corpora are summarized in Table TABREF14 .
Student responses. Research has explored using reflection prompts/muddy cards/one-minute papers to promote and collect reflections from students BIBREF65 , BIBREF66 , BIBREF67 . However, it is expensive and time consuming for humans to summarize such feedback. It is therefore desirable to automatically summarize the student feedback produced in online and offline environments, although it is only recently that a data collection effort to support such research has been initiated BIBREF58 , BIBREF57 . In our data, one particular type of student response is considered, named “reflective feedback” BIBREF68 , which has been shown to enhance interaction between instructors and students by educational researchers BIBREF69 , BIBREF70 . More specifically, students are presented with the following prompts after each lecture and asked to provide responses: 1) “describe what you found most interesting in today's class,” 2) “describe what was confusing or needed more detail,” and 3) “describe what you learned about how you learn.” These open-ended prompts are carefully designed to encourage students to self-reflect, allowing them to “recapture experience, think about it and evaluate it" BIBREF68 .
To test generality, we gathered student responses from four different courses, as shown in Table TABREF14 . The first one was collected by Menekse et al. Menekse:2011 using paper-based surveys from an introductory materials science and engineering class (henceforth Eng) taught in a major U.S. university, and a subset has been made public by us BIBREF11 , available at the link: http://www.coursemirror.com/download/dataset. The remaining three courses were collected by us using a mobile application, CourseMIRROR BIBREF57 , BIBREF58 , and the reference summaries for each course were created by human annotators with the proper background. The human annotators are allowed to create abstract summaries using their own words in addition to selecting phrases directly from the responses. While the 2nd and 3rd data sets are from the same course, Statistics for Industrial Engineers, they were taught in 2015 and 2016 respectively (henceforth Stat2015 and Stat2016), at Boğaziçi University in Turkey. The course was taught in English while the official language is Turkish. The last one is from a fundamental undergraduate Computer Science course (data structures) at a local U.S. university taught in 2016 (henceforth CS2016).
Another reason we choose the student responses is that we have advanced annotation allowing us to perform an intrinsic evaluation to test whether the low-rank approximation does capture similar concepts or not. An example of the annotation is shown in Table TABREF15 , where phrases in the student responses that are semantically the same as the summary phrases are highlighted with the same color by human annotators. For example, “error bounding" (S2), “error boundary" (S4), “finding that error" (S3), and “determining the critical value for error" (S7) are semantically equivalent to “Error bounding" in the human summary. Details of the intrinsic evaluation are introduced in SECREF20 .
Product and peer reviews. The review data sets are provided by Xiong and Litman xiong-litman:2014:Coling, consisting of 3 categories. The first one is a subset of product reviews from a widely used data set in review opinion mining and sentiment analysis, contributed by Jindal and Liu jindal2008opinion. In particular, it randomly sampled 3 sets of reviews from a representative product (digital camera), each with 18 reviews from an individual product type (e.g. “summarizing 18 camera reviews for Nikon D3200"). The second one is movie reviews crawled from IMDB.com by the authors themselves. The third one is peer reviews collected in a college-level history class from an online peer-review reciprocal system, SWoRD BIBREF71 . The average number of sentences per review set is 85 for camera reviews, 328 for movie reviews and 80 for peer reviews; the average number of words per sentence in the camera, movie, and peer reviews is 23, 24 and 19, respectively. The human summaries were collected in the form of online surveys (one survey per domain) hosted by Qualtrics. Each human summary contains 10 sentences from users' reviews. Example movie reviews are shown in Table TABREF17 .
News articles. Most summarization work focuses on news documents, as driven by the Document Understanding Conferences (DUC) and Text Analysis Conferences (TAC). For comparison, we select DUC 2004 to evaluate our approach (henceforth DUC04), which is widely used in the literature BIBREF72 , BIBREF73 , BIBREF74 , BIBREF75 , BIBREF76 . It consists of 50 clusters of Text REtrieval Conference (TREC) documents, from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contained on average 10 documents. The task is to create a short summary ( INLINEFORM0 665 bytes) of each cluster. Example news sentences are shown in Table TABREF19 .
Experiments
In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation.
Intrinsic evaluation
When examining the imputed sentence-concept co-occurrence matrix, we notice some interesting examples that indicate the effectiveness of the proposed approach, shown in Table TABREF21 .
We want to investigate whether the matrix completion (MC) helps to capture similar concepts (i.e., bigrams). Recall that, if a bigram INLINEFORM0 is similar to another bigram in a sentence INLINEFORM1 , the sentence INLINEFORM2 should assign a partial score to the bigram INLINEFORM3 after the low-rank approximation. For instance, “The activity with the bicycle parts" should give a partial score to “bike elements" since it is similar to “bicycle parts". Note that, the co-occurrence matrix INLINEFORM4 measures whether a sentence includes a bigram or not. Without matrix completion, if a bigram INLINEFORM5 does not appear in a sentence INLINEFORM6 , INLINEFORM7 . After matrix completion, INLINEFORM8 ( INLINEFORM9 is the low-rank approximation matrix of INLINEFORM10 ) becomes a continuous number ranging from 0 to 1 (negative values are truncated). Therefore, INLINEFORM11 does not necessarily mean the sentence contains a similar bigram, since it might also give positive scores to non-similar bigrams. To solve this issue, we propose two different ways to test whether the matrix completion really helps to capture similar concepts.
H1.a: A bigram receives a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. That is, if a bigram INLINEFORM0 is similar to one of bigrams in a sentence INLINEFORM1 , but not similar to any bigram in another sentence INLINEFORM2 , then after matrix completion, INLINEFORM3 .
H1.b: A sentence gives higher partial scores to bigrams that are similar to its own bigrams than bigrams that are different from its own. That is, if a sentence INLINEFORM0 has a bigram that is similar to INLINEFORM1 , but none of its bigrams is similar to INLINEFORM2 , then, after matrix completion, INLINEFORM3 .
In order to test these two hypotheses, we need to construct gold-standard pairs of similar bigrams and pairs of different bigrams, which can be obtained automatically from the phrase-highlighting data (Table TABREF15 ). We first extract a candidate bigram from a phrase if and only if a single bigram can be extracted from the phrase. In this way, we discard long phrases that contain multiple candidate bigrams in order to avoid ambiguity, as we cannot validate which of them matches another target bigram. A bigram is defined as two consecutive words, at least one of which is not a stop word. We then extract every pair of candidate bigrams that are highlighted in the same color as similar bigrams. Similarly, we extract every pair of candidate bigrams that are highlighted in different colors as different bigrams. For example, “bias reduction" is a candidate bigram, which is similar to “bias correction" since they are highlighted in the same color.
To test H1.a, given a bigram INLINEFORM0 , a bigram INLINEFORM1 that is similar to it, and a bigram INLINEFORM2 that is different from it, we can select the bigram INLINEFORM3 , and the sentence INLINEFORM4 that contains INLINEFORM5 , and the sentence INLINEFORM6 that contains INLINEFORM7 . We ignore INLINEFORM8 if it contains any other bigram that is similar to INLINEFORM9 to eliminate the compounded case that both similar and different bigrams are within one sentence. Note, if there are multiple sentences containing INLINEFORM10 , we consider each of them. In this way, we construct a triple INLINEFORM11 , and test whether INLINEFORM12 . To test H1.b, for each pair of similar bigrams INLINEFORM13 , and different bigrams INLINEFORM14 , we select the sentence INLINEFORM15 that contains INLINEFORM16 so that we construct a triple INLINEFORM17 , and test whether INLINEFORM18 . We also filtered out INLINEFORM19 that contains similar bigram(s) to INLINEFORM20 to remove the compounded effect. In this way, we collected a gold-standard data set to test the two hypotheses above as shown in Table TABREF24 .
The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than in a sentence that does not. Therefore, H1.a holds. For H1.b, we only observe INLINEFORM1 significantly on Stat2016, and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. There are several possible explanations. First, the gold-standard data set is still small in the sense that only a limited portion of the bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated in different colors are unrelated is too strong. For example, “hypothesis testing" and “H1 and Ho conditions" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occur in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts" in Table TABREF25 is associated with “hard to".
Extrinsic evaluation
Our proposed approach is compared against a range of baselines. They are 1) MEAD BIBREF30 , a centroid-based summarization system that scores sentences based on length, centroid, and position; 2) LexRank BIBREF29 , a graph-based summarization approach based on eigenvector centrality; 3) SumBasic BIBREF77 , an approach that assumes words occurring frequently in a document cluster have a higher chance of being included in the summary; 4) Pointer-Generator Networks (PGN) BIBREF16 , a state-of-the-art neural encoder-decoder approach for abstractive summarization. The system was trained on the CNN/Daily Mail data sets BIBREF78 , BIBREF14 . 5) ILP BIBREF21 , a baseline ILP framework without matrix completion.
The Pointer-Generator Network BIBREF16 is a neural encoder-decoder architecture. It encourages the system to copy words from the source text via pointing, while retaining the ability to produce novel words through the generator. It also contains a coverage mechanism that keeps track of what has been summarized, thus reducing word repetition. Pointer-generator networks have not previously been tested for summarizing content contributed by multiple authors; in this study we evaluate their performance on our collection of datasets.
For the ILP-based approaches, we use bigrams as concepts (bigrams consisting of only stopwords are removed) and term frequency as concept weights. We leverage the co-occurrence statistics both within and across the entire corpus. We also filtered out bigrams that appear only once in each corpus, yielding better ROUGE scores with lower computational cost. The results without using this low-frequency filtering are shown in the Appendix for comparison. In Table TABREF26 , we present summarization results evaluated by ROUGE BIBREF72 and human judges.
To compare with the official participants in DUC 2004 BIBREF79 , we selected the top-5 systems submitted in the competition (ranked by R-1), together with the 8 human annotators. The results are presented in Table TABREF27 .
ROUGE. It is a recall-oriented metric that compares system and reference summaries based on n-gram overlaps and is widely used in summarization evaluation. In this work, we report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-SU4 (R-SU4), and ROUGE-L (R-L) scores, which respectively measure the overlap of unigrams, bigrams, skip-bigrams (with a maximum gap length of 4), and the longest common subsequence. First, there is no winner across all data sets. MEAD is the best on camera; SumBasic is best on Stat2016 and mostly best on Stat2015; ILP is best on DUC04. The ILP baseline is comparable to the best participant (Table TABREF27 ) and even has the best R-2. PGN performs worst, which is not surprising since it was trained on a different data set and may not generalize to our data sets. Our method ILP+MC is best on peer review and mostly best on Eng and CS2016. Second, compared with ILP, our method works better on Eng, CS2016, movie, and peer.
These results show that our proposed method is not always better than the ILP framework, and no single summarization system wins on all data sets. This is perhaps not surprising. The no free lunch theorem for machine learning BIBREF80 states that, averaged over all possible data-generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally better than any other BIBREF81 .
Human Evaluation. Because ROUGE cannot thoroughly capture the semantic similarity between system and reference summaries, we further perform a human evaluation. For each task, we present a pair of system outputs in a random order, together with one human summary to five Amazon turkers. If there are multiple human summaries, we will present each human summary and the pair of system outputs to turkers. For student responses, we also present the prompt. An example Human Intelligence Task (HIT) is illustrated in Fig. FIGREF32 .
The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of the same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summary for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate with an example, for Stat2015, there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT is done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited, and each turker completed 27.3 HITs on average. The distribution of the human preference scores is shown in Fig. FIGREF34 .
We calculate the percentage of “wins” (strong or slight preference) for each system among all comparisons with its counterparts. Results are reported in the last column of Table TABREF26 . ILP+MC is preferred significantly more often than ILP on Stat2015, CS2016, and DUC04. There is no significant difference between ILP+MC and SumBasic on the student response data sets. Interestingly, a system with better ROUGE scores is not necessarily preferred more by humans. For example, ILP is preferred more on all three review data sets. Regarding the inter-annotator agreement, we find that 48.5% of the individual judgements agree with the majority votes. The agreement scores decomposed by data sets and system pairs are shown in Table TABREF35 . Overall, the agreement scores are low, only slightly above the 45.7% agreement that would be achieved by random clicking. There are several possible explanations. The first is that many turkers did click randomly (39 out of 160 failed our quality checkpoints); unfortunately, we did not check all the turkers as we inserted the checkpoints randomly. The second is that comparing two system summaries is difficult for humans, which leads to low agreement. Xiong and Litman xiong-litman:2014:Coling also found that it is hard to make humans agree on the choice of summary sentences. A third possibility is that turkers needed to see the raw input sentences, which are not shown in a HIT.
An interesting observation is that our approach produces summaries with more sentences, as shown in Table TABREF39 . The number of words in the summaries is approximately the same for all methods on a particular corpus, as constrained by Eq. . For the camera, movie and peer reviews, the number of sentences in the human summary is 10, and SumBasic and ILP+MC produce more sentences than ILP. It is hard for people to judge which system summary is closer to a human summary when the summaries are long (216, 242, and 190 words for camera, movie, and peer reviews respectively). For inter-annotator agreement, 50.3% of judgements agree with the majority votes for the student response data sets, 47.6% for reviews, and only 46.3% for news documents. We hypothesize that for these long summaries, people may prefer short system summaries, and for short summaries, people may prefer long system summaries. We leave the examination of this finding to future work.
Table TABREF40 presents example system outputs. This offers an intuitive understanding of our proposed approach.
Analysis of Influential Factors
In this section, we investigate the impact of the low-rank approximation process on the ILP framework. Therefore, in the following experiments, we focus on the direct comparison between ILP and ILP+MC and leave the comparison with other baselines as future work. The proposed method achieved better summarization performance than the ILP baseline on Eng, CS2016, movie, and peer. Unfortunately, it does not work as expected on two of the student response courses (Stat2015 and Stat2016), the camera reviews, and the news documents. This leaves open the research question of when and why the proposed method works better. To investigate which key factors impact the performance, we perform additional experiments using synthesized data sets.
A variety of attributes that might impact the performance are summarized in Table TABREF41 , categorized into two types. The input attributes are extracted from the original input documents, and the summary attributes are extracted from the human summaries together with the input documents. Below are some important attributes that we expect to have a large impact on the performance.
The attribute values extracted from the corpora are shown in Table TABREF42 . Note that a bigram which appears more often in the original documents has a better chance of being included in the human summaries, as indicated by several of the frequency-related attributes in Table TABREF42 . This supports our choice to cut low-frequency bigrams.
According to the ROUGE scores, our method works better on Eng, CS2016, movie, and peer (Table TABREF26 ). If we split the corpora into two groups for each attribute, according to whether ILP+MC works better, we do not find significant differences in the attribute values between the two groups. To further understand which factors impact the performance and have more predictive power, we train a binary classification decision tree, treating the 4 corpora on which our method works as positive examples and the remaining 4 as negative examples.
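A minimal sketch of this analysis, using scikit-learn, is shown below. It fits a shallow decision tree on per-corpus attribute vectors with a binary label indicating whether ILP+MC outperformed ILP. Only a single attribute is included, and all numeric values are illustrative placeholders rather than the actual numbers from Table TABREF42.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per corpus, one column per attribute; a single attribute is shown
# here (the ratio of human-summary bigrams occurring only once in the input,
# as a percentage). All values below are illustrative placeholders.
corpora = ["Eng", "CS2016", "movie", "peer", "Stat2015", "Stat2016", "camera", "DUC04"]
X = [[30.0], [28.5], [40.2], [35.7], [11.9], [14.3], [84.9], [20.1]]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = ILP+MC outperformed ILP on ROUGE

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["ratio_once_in_input"]))
```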
According to the decision tree model, there is only one decision point in the tree: the ratio of bigrams in human summaries that appear in the input only once. Generally, our proposed method works when this ratio is above the threshold learned by the tree, with camera being the exception. When the ratio is low, annotators either adopt concepts that appear multiple times in the input or simply use their own words. In this case, the frequency-based weighting (Eq. EQREF5 ) can capture the concepts that appear multiple times. On the other hand, when the ratio is high, a large number of bigrams appear only once in the input documents; in this case, annotators have difficulty selecting a representative one because the choice is ambiguous. Therefore, we hypothesize (H2) that ILP+MC gains a relative advantage over ILP as this ratio increases.
To test the predictive power of this attribute, we would like to evaluate it on new data sets. Unfortunately, creating new data sets with gold-standard human summaries is expensive and time-consuming, and a new data set may not have the desired property within a certain range of the ratio. Therefore, we propose to manipulate the ratio and create new data sets from the existing data sets without additional human annotation. The ratio can be written as
$$r = \frac{\big | \lbrace b \in \mathcal {B}_S : c_I(b) = 1 \rbrace \big |}{\big | \mathcal {B}_S \big |} \times 100\%,$$
where $\mathcal {B}_S$ is the set of bigrams occurring in the human summaries and $c_I(b)$ is the number of times bigram $b$ occurs in the input documents.
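The ratio can be computed directly from the input documents and the human summaries, as sketched below. This is an illustration under simplifying assumptions: bigrams are lowercased whitespace token pairs (whereas the actual pipeline also discards bigrams made entirely of stopwords), and the denominator is taken to be all summary bigrams, which is our reading of the definition above.

```python
from collections import Counter

def bigrams(text):
    # Simplified: lowercased, whitespace-tokenized; the actual pipeline also
    # discards bigrams consisting entirely of stopwords.
    tokens = text.lower().split()
    return [(a, b) for a, b in zip(tokens, tokens[1:])]

def ratio_once_in_input(input_sentences, summary_sentences):
    """Percentage of human-summary bigrams that occur exactly once in the input."""
    input_counts = Counter(bg for s in input_sentences for bg in bigrams(s))
    summary_bigrams = {bg for s in summary_sentences for bg in bigrams(s)}
    if not summary_bigrams:
        return 0.0
    once = sum(1 for bg in summary_bigrams if input_counts[bg] == 1)
    return 100.0 * once / len(summary_bigrams)

# Toy example
docs = ["the bike elements were interesting", "the bicycle parts activity was fun"]
summary = ["the bike elements and the bicycle parts"]
print(ratio_once_in_input(docs, summary))
```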
There are two different ways to control the ratio, both involving removing input sentences with certain constraints.
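The two specific deletion strategies used in our experiments are not reproduced here; the sketch below (reusing `ratio_once_in_input` from the previous sketch) only illustrates the general idea of raising the ratio by greedily deleting input sentences, and the greedy criterion is our own illustrative choice rather than a constraint from the experiments.

```python
def synthesize(input_sentences, summary_sentences, target_ratio):
    """Greedily delete input sentences until the ratio reaches target_ratio.

    Illustrative only: at each step it removes the sentence whose removal
    increases the ratio the most, which is one plausible way to raise the
    ratio, not necessarily a constraint used in our experiments.
    """
    sentences = list(input_sentences)
    while len(sentences) > 1:
        current = ratio_once_in_input(sentences, summary_sentences)
        if current >= target_ratio:
            break
        best_i, best_r = None, current
        for i in range(len(sentences)):
            candidate = sentences[:i] + sentences[i + 1:]
            r = ratio_once_in_input(candidate, summary_sentences)
            if r > best_r:
                best_i, best_r = i, r
        if best_i is None:  # no single deletion raises the ratio further
            break
        del sentences[best_i]
    return sentences
```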
In this way, we obtained different levels of the ratio $r$ by deleting sentences. The ROUGE scores on the synthesized corpora are shown in Table TABREF52 .
Our hypothesis H2 is partially valid. When the ratio increases, ILP+MC gains a relative advantage over ILP. For example, on Stat2015, ILP+MC is no longer significantly worse than ILP when the ratio increases from 11.9 to 18.1. On camera, ILP+MC becomes better than ILP when the ratio increases from 84.9 to 85.8. On Stat2016, CS2016, and Eng, more improvements, or significant improvements, are found for ILP+MC compared to ILP as the ratio increases. However, on movie and peer reviews, ILP+MC is worse than ILP when the ratio increases.
We have investigated a number of attributes that might impact the performance of our proposed method. Unfortunately, we do not have a conclusive answer as to when our method works better. However, we would like to share some thoughts about it.
First, our proposed method works better on two student response courses (Eng and CS2016) but not on the other two (Stat2015 and Stat2016). An important factor we ignored is that the students from the latter two courses are not native English speakers, resulting in significantly shorter responses (4.3 and 6.0 vs. 8.8 and 9.1; Table TABREF42 , the row with id=11). With shorter sentences, there is less context for the low-rank approximation to leverage.
Second, our proposed method works better on movie and peer reviews, but not on camera reviews. As pointed out by Xiong xiong2015helpfulness, both movie reviews and peer reviews are potentially more complicated than camera reviews: the review content consists of both the reviewer's evaluations of the subject (e.g., a movie or paper) and the reviewer's references to the subject, where the subject itself is full of content (e.g., the movie plot or the paper). In contrast, such references in product reviews are usually mentions of product components or properties, which have limited variation. This characteristic makes review summarization more challenging in these two domains.
Conclusion
We made the first effort to summarize student feedback using an Integer Linear Programming framework with a low-rank matrix approximation, and applied it to different types of data sets including news articles, product reviews, and peer reviews. Our approach allows sentences to share co-occurrence statistics and alleviates the sparsity issue. Our experiments showed that the proposed approach performs better than a range of baselines on the student response corpora Eng and CS2016 in terms of ROUGE scores, but not on the other courses.
ROUGE is often adopted in research papers to evaluate the quality of summarization because it is fast and correlates well with human evaluation BIBREF72 , BIBREF82 . However, ROUGE has also been criticized for not thoroughly capturing the semantic similarity between system and reference summaries. Different alternatives have been proposed to enhance ROUGE. For example, Graham rankel2016statistical proposed to use content-oriented features in conjunction with linguistic features. Similarly, Cohan and Goharian COHAN16.1144 proposed to use content relevance. At the same time, many researchers supplement ROUGE with a manual evaluation. This is why we conduct evaluations using both ROUGE and human evaluation in this work.
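For reference, ROUGE scores of the kind reported in this paper can be approximated with the publicly available rouge-score package, as sketched below. Note that this is an assumption about tooling rather than the exact toolkit we used: the original Perl ROUGE script and its options (e.g., stemming, stopword removal) may give slightly different numbers, and ROUGE-SU4 is not supported by this package. The reference and system strings are toy examples.

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "Error bounding and determining the critical value were most interesting."
system = "Students found error bounding and critical values interesting."
scores = scorer.score(reference, system)  # target first, prediction second
for name, result in scores.items():
    print(name, round(result.recall, 4), round(result.fmeasure, 4))
```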
However, we found that a system with better ROUGE scores is not necessarily preferred more often by humans (§ SECREF28 ). For example, ILP is preferred more on all three review data sets even though it obtained lower ROUGE scores than the other systems. This coincides with the fact that ILP generated summaries with fewer sentences than the other two systems (Table TABREF39 ).
We also investigated a variety of attributes that might impact the performance on a range of data sets. Unfortunately, we did not reach a conclusive answer as to when our method will work better.
In the future, we would like to conduct a large-scale intrinsic evaluation to examine whether the low-rank matrix approximation captures similar bigrams or not, and we want to investigate more attributes, such as new metrics for diversity. We would also like to explore opportunities for combining a vector sentence representation learned by a neural network with the ILP framework.
This manuscript extends our previous work on summarizing student course responses BIBREF11 , BIBREF56 , BIBREF12 submitted after each lecture via a mobile app named CourseMIRROR BIBREF57 , BIBREF58 , BIBREF59 . The students are asked to respond to reflective prompts such as “describe what you found most interesting in today's class” and “describe what was confusing or needed more detail.” For large classes with hundreds of students, it can be quite difficult for instructors to manually analyze the student responses, hence the help of automatic summarization. Our extensions of this work are along three dimensions: (i) we crack the “black-box” of the low-rank approximation algorithm to understand if it indeed allows lexically-diverse but semantically-similar items to share co-occurrence statistics; (ii) we compare the ILP-based summarization framework with state-of-the-art baselines, including a popular neural encoder-decoder model for summarization; (iii) we expand the student feedback datasets to include responses collected from materials science and engineering, statistics for industrial engineers, and data structures. We additionally experiment with reviews and news articles. Analyzing the unique characteristics of each dataset allows us to identify crucial factors influencing the summarization performance.
With the fast development of Massive Open Online Courses (MOOC) platforms, more attention is being dedicated to analyzing educationally-oriented language data. These studies seek to identify student leaders from MOOC discussion forums BIBREF60 , perform sentiment analysis on student discussions BIBREF61 , improve student engagement and retention BIBREF62 , BIBREF63 , and use language generation techniques to automatically generate feedback to students BIBREF64 . Our focus in this paper is to automatically summarize student responses so that instructors can collect feedback in a timely manner. We expect the developed summarization techniques and result analysis to further summarization research in similar text genres exhibiting high lexical variety.
ILP Formulation
Let INLINEFORM0 be a set of documents that consist of INLINEFORM1 sentences in total. Let INLINEFORM2 , INLINEFORM3 indicate if a sentence INLINEFORM4 is selected ( INLINEFORM5 ) or not ( INLINEFORM6 ) in the summary. Similarly, let INLINEFORM7 be the number of unique concepts in INLINEFORM8 . INLINEFORM9 , INLINEFORM10 indicate the appearance of concepts in the summary. Each concept INLINEFORM11 is assigned a weight of INLINEFORM12 , often measured by the number of sentences or documents that contain the concept. The ILP-based summarization approach BIBREF20 searches for an optimal assignment to the sentence and concept variables so that the selected summary sentences maximize coverage of important concepts. The relationship between concepts and sentences is captured by a co-occurrence matrix INLINEFORM13 , where INLINEFORM14 indicates the INLINEFORM15 -th concept appears in the INLINEFORM16 -th sentence, and INLINEFORM17 otherwise. In the literature, bigrams are frequently used as a surrogate for concepts BIBREF24 , BIBREF21 . We follow the convention and use `concept' and `bigram' interchangeably in this paper.
Two sets of linear constraints are specified to ensure the ILP validity: (1) a concept is selected if and only if at least one sentence carrying it has been selected (Eq. ), and (2) all concepts in a sentence will be selected if that sentence is selected (Eq. ). Finally, the selected summary sentences are allowed to contain a total of INLINEFORM0 words or less (Eq. ). DISPLAYFORM0
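To make the formulation concrete, the following sketch encodes the concept-based ILP with the PuLP modeling library. It takes the co-occurrence matrix, concept weights, sentence lengths, and the length budget as inputs; the variable names and the choice of the CBC solver are ours and are not specified in the paper.

```python
# Minimal PuLP sketch of the concept-based ILP described above (assumptions:
# A is an n_concepts x n_sentences list of 0/1 rows, w holds concept weights,
# lengths holds words per sentence, L is the summary word budget).
import pulp

def solve_concept_ilp(A, w, lengths, L):
    n_concepts, n_sentences = len(A), len(A[0])
    prob = pulp.LpProblem("concept_ilp", pulp.LpMaximize)
    s = [pulp.LpVariable(f"s_{j}", cat="Binary") for j in range(n_sentences)]
    c = [pulp.LpVariable(f"c_{i}", cat="Binary") for i in range(n_concepts)]
    # Objective: maximize the total weight of the covered concepts.
    prob += pulp.lpSum(w[i] * c[i] for i in range(n_concepts))
    for i in range(n_concepts):
        # A concept is covered only if at least one sentence carrying it is selected.
        prob += c[i] <= pulp.lpSum(A[i][j] * s[j] for j in range(n_sentences))
        for j in range(n_sentences):
            if A[i][j]:
                # Selecting a sentence selects every concept it contains.
                prob += s[j] <= c[i]
    # The selected sentences may contain at most L words in total.
    prob += pulp.lpSum(lengths[j] * s[j] for j in range(n_sentences)) <= L
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_sentences) if s[j].value() == 1]
```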
The above ILP can be transformed to matrix representation: DISPLAYFORM0
We use boldface letters to represent vectors and matrices. INLINEFORM0 is an auxiliary matrix created by horizontally stacking the concept vector INLINEFORM1 INLINEFORM2 times. Constraint set (Eq. ) specifies that selecting a sentence implies that all concepts it carries are selected. It corresponds to INLINEFORM3 constraints of the form INLINEFORM4 , where INLINEFORM5 .
As far as we know, this is the first matrix representation of its kind for the ILP framework. It clearly shows the two important components of this framework: 1) the concept-sentence co-occurrence matrix INLINEFORM0 , and 2) the concept weight vector INLINEFORM1 . Existing work focuses mainly on generating better estimates of concept weights ( INLINEFORM2 ), while we focus on improving the co-occurrence matrix INLINEFORM3 .
Our Approach
Because of the lexical diversity problem, we suspect the co-occurrence matrix INLINEFORM0 may not establish a faithful correspondence between sentences and concepts. A concept may be conveyed using multiple bigram expressions; however, the current co-occurrence matrix only captures a binary relationship between sentences and bigrams. For example, we ought to give partial credit to “bicycle parts” given that a similar expression “bike elements” appears in the sentence. Domain-specific synonyms may be captured as well. For example, the sentence “I tried to follow along but I couldn't grasp the concepts” is expected to partially contain the concept “understand the”, although the latter did not appear in the sentence.
The existing matrix INLINEFORM0 is highly sparse. Only 3.7% of the entries are non-zero in the student response data sets on average (§ SECREF5 ). We therefore propose to impute the co-occurrence matrix by filling in missing values (i.e., matrix completion). This is accomplished by approximating the original co-occurrence matrix using a low-rank matrix. The low-rankness encourages similar concepts to be shared across sentences.
The ILP with a low-rank approximation of the co-occurrence matrix can be formalized as follows. DISPLAYFORM0
The low-rank approximation process makes two notable changes to the existing ILP framework.
Concretely, given the co-occurrence matrix INLINEFORM0 , we aim to find a low-rank matrix INLINEFORM1 whose values are close to INLINEFORM2 at the observed positions. Our objective function is DISPLAYFORM0
where INLINEFORM0 represents the set of observed value positions. INLINEFORM1 denotes the trace norm of INLINEFORM2 , i.e., INLINEFORM3 , where INLINEFORM4 is the rank of INLINEFORM5 and INLINEFORM6 are the singular values. By defining the following projection operator INLINEFORM7 , DISPLAYFORM0
our objective function (Eq. EQREF10 ) can be succinctly represented as DISPLAYFORM0
where INLINEFORM0 denotes the Frobenius norm.
Following Mazumder et al. Mazumder:2010, we optimize Eq. EQREF12 using the proximal gradient descent algorithm. The update rule is DISPLAYFORM0
where INLINEFORM0 is the step size at iteration k and the proximal function INLINEFORM1 is defined as the singular value soft-thresholding operator, INLINEFORM2 , where INLINEFORM3 is the singular value decomposition (SVD) of INLINEFORM4 and INLINEFORM5 .
Since the gradient of INLINEFORM0 is Lipschitz continuous with INLINEFORM1 ( INLINEFORM2 is the Lipschitz continuous constant), we follow Mazumder et al. Mazumder:2010 to choose fixed step size INLINEFORM3 , which has a provable convergence rate of INLINEFORM4 , where INLINEFORM5 is the number of iterations.
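A minimal NumPy sketch of this optimization is shown below. It assumes the observed set is taken to be the non-zero entries of the binary co-occurrence matrix, uses the fixed unit step size discussed above, and runs for a fixed number of iterations instead of checking convergence.

```python
import numpy as np

def svd_soft_threshold(Z, tau):
    # Proximal operator of the trace norm: shrink the singular values by tau.
    U, sigma, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(sigma - tau, 0.0)) @ Vt

def low_rank_complete(A, lam, step=1.0, n_iter=200):
    """Proximal gradient descent for 0.5*||P_Omega(A - X)||_F^2 + lam*||X||_*."""
    omega = (A != 0).astype(float)   # assumption: observed positions = non-zero entries
    X = np.zeros_like(A, dtype=float)
    for _ in range(n_iter):
        grad = omega * (X - A)       # gradient of the smooth squared-error term
        X = svd_soft_threshold(X - step * grad, step * lam)
    return np.maximum(X, 0.0)        # truncate negative entries, as in the paper
```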
Datasets
To demonstrate the generality of the proposed approach, we consider three distinct types of corpora, ranging from student response data sets from four different courses to three sets of reviews to one benchmark of news articles. The corpora are summarized in Table TABREF14 .
Student responses. Research has explored using reflection prompts/muddy cards/one-minute papers to promote and collect reflections from students BIBREF65 , BIBREF66 , BIBREF67 . However, it is expensive and time consuming for humans to summarize such feedback. It is therefore desirable to automatically summarize the student feedback produced in online and offline environments, although it is only recently that a data collection effort to support such research has been initiated BIBREF58 , BIBREF57 . In our data, one particular type of student response is considered, named “reflective feedback” BIBREF68 , which has been shown to enhance interaction between instructors and students by educational researchers BIBREF69 , BIBREF70 . More specifically, students are presented with the following prompts after each lecture and asked to provide responses: 1) “describe what you found most interesting in today's class,” 2) “describe what was confusing or needed more detail,” and 3) “describe what you learned about how you learn.” These open-ended prompts are carefully designed to encourage students to self-reflect, allowing them to “recapture experience, think about it and evaluate it" BIBREF68 .
To test generality, we gathered student responses from four different courses, as shown in Table TABREF14 . The first one was collected by Menekse et al. Menekse:2011 using paper-based surveys from an introductory materials science and engineering class (henceforth Eng) taught in a major U.S. university, and a subset is made public by us BIBREF11 , available at the link: http://www.coursemirror.com/download/dataset. The remaining three courses are collected by us using a mobile application, CourseMIRROR BIBREF57 , BIBREF58 and then the reference summaries for each course are created by human annotators with the proper background. The human annotators are allowed to create abstract summaries using their own words in addition to selecting phrases directly from the responses. While the 2nd and 3rd data sets are from the same course, Statistics for Industrial Engineers, they were taught in 2015 and 2016 respectively (henceforth Stat2015 and Stat2016), at the Boǧaziçi University in Turkey. The course was taught in English while the official language is Turkish. The last one is from a fundamental undergraduate Computer Science course (data structures) at a local U.S. university taught in 2016 (henceforth CS2016).
Another reason we choose the student responses is that we have advanced annotation allowing us to perform an intrinsic evaluation to test whether the low-rank approximation does capture similar concepts or not. An example of the annotation is shown in Table TABREF15 , where phrases in the student responses that are semantically the same as the summary phrases are highlighted with the same color by human annotators. For example, “error bounding" (S2), “error boundary" (S4), “finding that error" (S3), and “determining the critical value for error" (S7) are semantically equivalent to “Error bounding" in the human summary. Details of the intrinsic evaluation are introduced in SECREF20 .
Product and peer reviews. The review data sets are provided by Xiong and Litman xiong-litman:2014:Coling, consisting of 3 categories. The first one is a subset of product reviews from a widely used data set in review opinion mining and sentiment analysis, contributed by Jindal and Liu jindal2008opinion. In particular, it randomly sampled 3 sets of reviews from a representative product (digital camera), each with 18 reviews from an individual product type (e.g. “summarizing 18 camera reviews for Nikon D3200"). The second one is movie reviews crawled from IMDB.com by the authors themselves. The third one is peer reviews collected in a college-level history class from an online peer-review reciprocal system, SWoRD BIBREF71 . The average number of sentences per review set is 85 for camera reviews, 328 for movie reviews and 80 for peer reviews; the average number of words per sentence in the camera, movie, and peer reviews is 23, 24 and 19, respectively. The human summaries were collected in the form of online surveys (one survey per domain) hosted by Qualtrics. Each human summary contains 10 sentences from users' reviews. Example movie reviews are shown in Table TABREF17 .
News articles. Most summarization work focuses on news documents, as driven by the Document Understanding Conferences (DUC) and Text Analysis Conferences (TAC). For comparison, we select DUC 2004 to evaluate our approach (henceforth DUC04), which is widely used in the literature BIBREF72 , BIBREF73 , BIBREF74 , BIBREF75 , BIBREF76 . It consists of 50 clusters of Text REtrieval Conference (TREC) documents, from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contained on average 10 documents. The task is to create a short summary ( INLINEFORM0 665 bytes) of each cluster. Example news sentences are shown in Table TABREF19 .
Experiments
In this section, we evaluate the proposed method intrinsically in terms of whether the co-occurrence matrix after the low-rank approximation is able to capture similar concepts on student response data sets, and also extrinsically in terms of the end task of summarization on all corpora. In the following experiments, summary length is set to be the average number of words in human summaries or less. For the matrix completion algorithm, we perform grid search (on a scale of [0, 5] with stepsize 0.5) to tune the hyper-parameter INLINEFORM0 (Eq. EQREF10 ) with a leave-one-lecture-out (for student responses) or leave-one-task-out (for others) cross-validation.
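The tuning protocol can be sketched as below; `score_fn` stands in for the full pipeline of running the summarizer with a given trace-norm weight and scoring the output, and the exact way the paper combines the grid search with the leave-one-out folds may differ in details.

```python
import numpy as np

def loo_select_lambda(tasks, score_fn, lam_grid=np.arange(0.0, 5.5, 0.5)):
    """Leave-one-task-out grid search for the trace-norm weight (sketch).

    score_fn(task, lam) is a hypothetical callable returning a summary quality
    score (e.g. ROUGE) for one task summarized with the given lambda.
    Returns the lambda chosen for each task index.
    """
    scores = {lam: [score_fn(t, lam) for t in tasks] for lam in lam_grid}
    chosen = {}
    for i, _held_out in enumerate(tasks):
        # For each held-out task, pick the lambda that works best on the rest.
        chosen[i] = max(
            lam_grid,
            key=lambda lam: np.mean([s for j, s in enumerate(scores[lam]) if j != i]),
        )
    return chosen
```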
Intrinsic evaluation
When examining the imputed sentence-concept co-occurrence matrix, we notice some interesting examples that indicate the effectiveness of the proposed approach, shown in Table TABREF21 .
We want to investigate whether the matrix completion (MC) helps to capture similar concepts (i.e., bigrams). Recall that, if a bigram INLINEFORM0 is similar to another bigram in a sentence INLINEFORM1 , the sentence INLINEFORM2 should assign a partial score to the bigram INLINEFORM3 after the low-rank approximation. For instance, “The activity with the bicycle parts" should give a partial score to “bike elements" since it is similar to “bicycle parts". Note that, the co-occurrence matrix INLINEFORM4 measures whether a sentence includes a bigram or not. Without matrix completion, if a bigram INLINEFORM5 does not appear in a sentence INLINEFORM6 , INLINEFORM7 . After matrix completion, INLINEFORM8 ( INLINEFORM9 is the low-rank approximation matrix of INLINEFORM10 ) becomes a continuous number ranging from 0 to 1 (negative values are truncated). Therefore, INLINEFORM11 does not necessarily mean the sentence contains a similar bigram, since it might also give positive scores to non-similar bigrams. To solve this issue, we propose two different ways to test whether the matrix completion really helps to capture similar concepts.
H1.a: A bigram receives a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. That is, if a bigram INLINEFORM0 is similar to one of bigrams in a sentence INLINEFORM1 , but not similar to any bigram in another sentence INLINEFORM2 , then after matrix completion, INLINEFORM3 .
H1.b: A sentence gives higher partial scores to bigrams that are similar to its own bigrams than bigrams that are different from its own. That is, if a sentence INLINEFORM0 has a bigram that is similar to INLINEFORM1 , but none of its bigrams is similar to INLINEFORM2 , then, after matrix completion, INLINEFORM3 .
In order to test these two hypotheses, we need to construct gold-standard pairs of similar bigrams and pairs of different bigrams, which can be automatically obtained with the phrase-highlighting data (Table TABREF15 ). We first extract a candidate bigram from a phrase if and only if a single bigram can be extracted from the phrase. In this way, we discard long phrases if there are multiple candidate bigrams among them in order to avoid ambiguity as we cannot validate which of them match another target bigram. A bigram is defined as two words and at least one of them is not a stop word. We then extract every pair of candidate bigrams that are highlighted in the same color as similar bigrams. Similarly, we extract every pair of candidate bigrams that are highlighted as different colors as different bigrams. For example, “bias reduction" is a candidate phrase, which is similar to “bias correction" since they are in the same color.
To test H1.a, given a bigram INLINEFORM0 , a bigram INLINEFORM1 that is similar to it, and a bigram INLINEFORM2 that is different from it, we can select the bigram INLINEFORM3 , and the sentence INLINEFORM4 that contains INLINEFORM5 , and the sentence INLINEFORM6 that contains INLINEFORM7 . We ignore INLINEFORM8 if it contains any other bigram that is similar to INLINEFORM9 to eliminate the compounded case that both similar and different bigrams are within one sentence. Note, if there are multiple sentences containing INLINEFORM10 , we consider each of them. In this way, we construct a triple INLINEFORM11 , and test whether INLINEFORM12 . To test H1.b, for each pair of similar bigrams INLINEFORM13 , and different bigrams INLINEFORM14 , we select the sentence INLINEFORM15 that contains INLINEFORM16 so that we construct a triple INLINEFORM17 , and test whether INLINEFORM18 . We also filtered out INLINEFORM19 that contains similar bigram(s) to INLINEFORM20 to remove the compounded effect. In this way, we collected a gold-standard data set to test the two hypotheses above as shown in Table TABREF24 .
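Given the constructed triples and the imputed matrix, the check for H1.a can be coded as below. The matrix is indexed with concept rows and sentence columns as in the formulation above; the paired Wilcoxon signed-rank test is our choice, since the paper reports significance without naming the test used, and the check for H1.b is symmetric.

```python
from scipy.stats import wilcoxon

def check_h1a(triples, M_hat):
    """triples: (bigram i, sentence j containing a similar bigram, sentence k not).

    Compares the imputed entries M_hat[i, j] and M_hat[i, k]; returns the
    fraction of triples where the similar-bigram sentence scores higher and
    the p-value of a paired Wilcoxon signed-rank test (our choice of test).
    """
    a = [M_hat[i, j] for i, j, k in triples]
    b = [M_hat[i, k] for i, j, k in triples]
    wins = sum(x > y for x, y in zip(a, b)) / len(triples)
    _, p_value = wilcoxon(a, b)
    return wins, p_value
```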
The results are shown in Table TABREF25 . INLINEFORM0 significantly on all three courses. That is, a bigram does receive a higher partial score in a sentence that contains similar bigram(s) to it than a sentence that does not. Therefore, H1.a holds. For H1.b, we only observe INLINEFORM1 significantly on Stat2016 and there is no significant difference between INLINEFORM2 and INLINEFORM3 on the other two courses. First, the gold-standard data set is still small in the sense that only a limited portion of bigrams in the entire data set are evaluated. Second, the assumption that phrases annotated by different colors are not necessarily unrelated is too strong. For example, “hypothesis testing" and “H1 and Ho conditions" are in different colors in the example of Table TABREF15 , but one is a subtopic of the other. An alternative way to evaluate the hypothesis is to let humans judge whether two bigrams are similar or not, which we leave for future work. Third, the gold standards are pairs of semantically similar bigrams, while matrix completion captures bigrams that occurs in a similar context, which is not necessarily equivalent to semantic similarity. For example, the sentence “graphs make it easier to understand concepts" in Table TABREF25 is associated with “hard to".
Extrinsic evaluation
Our proposed approach is compared against a range of baselines. They are 1) MEAD BIBREF30 , a centroid-based summarization system that scores sentences based on length, centroid, and position; 2) LexRank BIBREF29 , a graph-based summarization approach based on eigenvector centrality; 3) SumBasic BIBREF77 , an approach that assumes words occurring frequently in a document cluster have a higher chance of being included in the summary; 4) Pointer-Generator Networks (PGN) BIBREF16 , a state-of-the-art neural encoder-decoder approach for abstractive summarization. The system was trained on the CNN/Daily Mail data sets BIBREF78 , BIBREF14 . 5) ILP BIBREF21 , a baseline ILP framework without matrix completion.
The Pointer-Generator Networks BIBREF16 describes a neural encoder-decoder architecture. It encourages the system to copy words from the source text via pointing, while retaining the ability to produce novel words through the generator. It also contains a coverage mechanism to keep track of what has been summarized, thus reducing word repetition. The pointer-generator networks have not been tested for summarizing content contributed by multiple authors. In this study we evaluate their performance on our collection of datasets.
For the ILP-based approaches, we use bigrams as concepts (bigrams consisting of only stopwords are removed) and term frequency as concept weights. We leverage the co-occurrence statistics both within and across the entire corpus. We also filtered out bigrams that appear only once in each corpus, yielding better ROUGE scores with lower computational cost. The results without using this low-frequency filtering are shown in the Appendix for comparison. In Table TABREF26 , we present summarization results evaluated by ROUGE BIBREF72 and human judges.
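The concept extraction and weighting used by the ILP-based systems can be sketched as follows; the tokenization and the stopword list are left as inputs since the paper does not specify them.

```python
from collections import Counter

def extract_concepts(sentences, stopwords):
    """Bigram concepts with term-frequency weights (sketch).

    sentences: list of token lists for the whole corpus.  Bigrams whose two
    tokens are both stopwords are removed, and bigrams occurring only once in
    the corpus are dropped, mirroring the filtering described above.
    """
    counts = Counter()
    for tokens in sentences:
        for a, b in zip(tokens, tokens[1:]):
            if a in stopwords and b in stopwords:
                continue
            counts[(a, b)] += 1
    return {bigram: tf for bigram, tf in counts.items() if tf > 1}
```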
To compare with the official participants in DUC 2004 BIBREF79 , we selected the top-5 systems submitted in the competition (ranked by R-1), together with the 8 human annotators. The results are presented in Table TABREF27 .
ROUGE. It is a recall-oriented metric that compares system and reference summaries based on n-gram overlaps, which is widely used in summarization evaluation. In this work, we report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-SU4 (R-SU4), and ROUGE-L (R-L) scores, which respectively measure the overlap of unigrams, bigrams, skip-bigram (with a maximum gap length of 4), and longest common subsequence. First, there is no winner for all data sets. MEAD is the best one on camera; SumBasic is best on Stat2016 and mostly on Stat2015; ILP is best on DUC04. The ILP baseline is comparable to the best participant (Table TABREF27 ) and even has the best R-2. PGN is the worst, which is not surprising since it is trained on a different data set, which may not generalize to our data sets. Our method ILP+MC is best on peer review and mostly on Eng and CS2016. Second, compared with ILP, our method works better on Eng, CS2016, movie, and peer.
These results show that our proposed method does not always perform better than the ILP framework, and that no single summarization system wins on all data sets. This is perhaps not surprising. The no free lunch theorem for machine learning BIBREF80 states that, averaged over all possible data-generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other BIBREF81 .
Human Evaluation. Because ROUGE cannot thoroughly capture the semantic similarity between system and reference summaries, we further perform a human evaluation. For each task, we present a pair of system outputs in a random order, together with one human summary to five Amazon turkers. If there are multiple human summaries, we will present each human summary and the pair of system outputs to turkers. For student responses, we also present the prompt. An example Human Intelligence Task (HIT) is illustrated in Fig. FIGREF32 .
The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-point Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of the same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs INLINEFORM0 5 turkers INLINEFORM1 (36 tasks INLINEFORM2 1 human summary for Eng + 44 INLINEFORM3 2 for Stat2015 + 48 INLINEFORM4 2 for Stat2016 + 46 INLINEFORM5 2 for CS2016 + 3 INLINEFORM6 8 for camera + 3 INLINEFORM7 5 for movie + 3 INLINEFORM8 2 for peer + 50 INLINEFORM9 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate with an example, for Stat2015 there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 INLINEFORM10 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 INLINEFORM11 2 INLINEFORM12 3=264 HITs for Stat2015. Each HIT is done by 5 different turkers, resulting in 264 INLINEFORM13 5=1,320 comparisons. In total, 306 unique turkers were recruited, and each turker completed 27.3 HITs on average. The distribution of the human preference scores is shown in Fig. FIGREF34 .
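The comparison count quoted above can be reproduced directly from the per-corpus task and reference-summary counts listed in the text:

```python
# (number of tasks, human summaries per task) for each corpus, as listed above.
counts = {"Eng": (36, 1), "Stat2015": (44, 2), "Stat2016": (48, 2),
          "CS2016": (46, 2), "camera": (3, 8), "movie": (3, 5),
          "peer": (3, 2), "DUC04": (50, 4)}
system_pairs, turkers_per_hit = 3, 5
hits = system_pairs * sum(tasks * refs for tasks, refs in counts.values())
print(hits, hits * turkers_per_hit)  # 1671 HITs, 8355 pairwise comparisons
```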
We calculate the percentage of “wins” (strong or slight preference) for each system among all comparisons with its counterparts. Results are reported in the last column of Table TABREF26 . ILP+MC is preferred significantly more often than ILP on Stat2015, CS2016, and DUC04. There is no significant difference between ILP+MC and SumBasic on the student response data sets. Interestingly, a system with better ROUGE scores is not necessarily preferred more by humans. For example, ILP is preferred more on all three review data sets. Regarding the inter-annotator agreement, we find that 48.5% of the individual judgements agree with the majority votes. The agreement scores decomposed by data sets and system pairs are shown in Table TABREF35 . Overall, the agreement scores are quite low, only slightly above the agreement score that would be achieved by clicking randomly (45.7%). There are several possible explanations. The first is that many turkers did click randomly (39 out of 160 failed our quality checkpoints); unfortunately, we did not check all the turkers as we inserted the checkpoints randomly. The second possibility is that comparing two system summaries is difficult for humans, which leads to a low agreement score. Xiong and Litman xiong-litman:2014:Coling also found that it is hard to make humans agree on the choice of summary sentences. A third possibility is that turkers may have needed to see the raw input sentences, which are not shown in a HIT.
An interesting observation is that our approach produces summaries with more sentences, as shown in Table TABREF39 . The number of words in the summaries is approximately the same for all methods for a particular corpus, which is constrained by Eq. . For camera, movie and peer reviews, the number of sentences in the human summary is 10, and SumBasic and ILP+MC produce more sentences than ILP. It is hard for people to judge which system summary is closer to a human summary when the summaries are long (216, 242, and 190 words for camera, movie, and peer reviews respectively). For inter-annotator agreement, 50.3% of judgements agree with the majority votes for the student response data sets, 47.6% for reviews, and only 46.3% for news documents. We hypothesize that for these long summaries, people may prefer short system summaries, and for short summaries, people may prefer long system summaries. We leave the examination of this finding to future work.
Table TABREF40 presents example system outputs. This offers an intuitive understanding of our proposed approach.
Analysis of Influential Factors
In this section, we investigate the impact of the low-rank approximation process on the ILP framework. Therefore, in the following experiments, we focus on the direct comparison between ILP and ILP+MC and leave the comparison to other baselines as future work. The proposed method achieved better summarization performance than the ILP baseline on Eng, CS2016, movie, and peer. Unfortunately, it does not work as expected on the other two student response courses (Stat2015 and Stat2016), the camera reviews, and the news documents. This raises the research question of when and why the proposed method works better. In order to investigate which key factors impact the performance, we perform additional experiments using synthesized data sets.
A variety of attributes that might impact the performance are summarized in Table TABREF41 , categorized into two types. The input attributes are extracted from the original input documents, and the summary attributes are extracted from the human summaries together with the input documents. Here are some important attributes we expect to have a big impact on the performance.
The attributes extracted from the corpora are shown in Table TABREF42 . Note, a bigram that appears more often in original documents has a better chance to be included in human summaries as indicated by INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 . This verifies our choice to cut low-frequency bigrams.
According to the ROUGE scores, our method works better on Eng, CS2016, movie, and peer (Table TABREF26 ). If we group each attribute into two groups, corresponding to whether ILP+MC works better, we do not find significant differences among these attributes. To further understand which factors impact the performance and have more predictive power, we train a binary classification decision tree by treating the 4 working corpora as positive examples and the remaining 4 as negative examples.
According to the decision tree model, there is only one decision point in the tree: INLINEFORM0 , the ratio of bigrams in human summaries that are in the input only once. Generally, our proposed method works if INLINEFORM1 , except for camera. When INLINEFORM2 is low, it means that annotators either adopt concepts that appear multiple times or just use their own. In this case, the frequency-based weighting (i.e., INLINEFORM3 in Eq. EQREF5 ) can capture the concepts that appear multiple times. On the other hand, when INLINEFORM4 is high, it means that a big number of bigrams appeared only once in the input document. In this case, annotators have difficulty selecting a representative one due to the ambiguous choice. Therefore, we hypothesize,
To test the predictive power of this attribute, we want to test it on new data sets. Unfortunately, creating new data sets with gold-standard human summaries is expensive and time-consuming, and the new data set may not have the desired property within a certain range of INLINEFORM0 . Therefore, we propose to manipulate the ratio and create new data sets using the existing data sets without additional human annotation. INLINEFORM1 can be represented as follows, DISPLAYFORM0
where INLINEFORM0 INLINEFORM1
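Since the formula itself is not reproduced here, the sketch below computes the attribute as we read the description: the share of human-summary bigrams that occur exactly once in the input documents. Restricting the denominator to summary bigrams that appear in the input at all is our assumption.

```python
from collections import Counter

def singleton_bigram_ratio(summary_bigrams, input_bigrams):
    """Share of human-summary bigrams that appear exactly once in the input."""
    counts = Counter(input_bigrams)
    covered = [b for b in set(summary_bigrams) if counts[b] > 0]
    once = [b for b in covered if counts[b] == 1]
    return len(once) / len(covered) if covered else 0.0
```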
There are two different ways to control the ratio, both involving removing input sentences with certain constraints.
In this way, we obtained different levels of INLINEFORM0 by deleting sentences. The ROUGE scores on the synthesized corpus are shown in Table TABREF52 .
Our hypothesis H2 is partially valid. When increasing the ratio, ILP+MC has a relative advantage gain over ILP. For example, for Stat2015, ILP+MC is not significantly worse than ILP any more when increasing the ratio from 11.9 to 18.1. For camera, ILP+MC becomes better than ILP when increasing the ratio from 84.9 to 85.8. For Stat2016, CS2016, Eng, more improvements or significant improvements can be found for ILP+MC compared to ILP when increasing the ratio. However, for movie and peer review, ILP+MC is worse than ILP when increasing the ratio.
We have investigated a number of attributes that might impact the performance of our proposed method. Unfortunately, we do not have a conclusive answer as to when our method works better. However, we would like to share some thoughts about it.
First, our proposed method works better on two student responses courses (Eng and CS2016), but not the other two (Stat2015 and Stat2016). An important factor we ignored is that the students from the other two courses are not native English speakers, resulting in significantly shorter responses (4.3 INLINEFORM0 6.0 INLINEFORM1 8.8, 9.1, INLINEFORM2 , Table TABREF42 , the row with id=11). With shorter sentences, there will be less context to leverage the low-rank approximation.
Second, our proposed method works better on movie and peer reviews, but not camera reviews. As pointed out by Xiong xiong2015helpfulness, both movie reviews and peer reviews are potentially more complicated than the camera reviews, as the review content consists of both the reviewer's evaluations of the subject (e.g., a movie or paper) and the reviewer's references of the subject, where the subject itself is full of content (e.g., movie plot, papers). In contrast, such references in product reviews are usually the mentions of product components or properties, which have limited variations. This characteristic makes review summarization more challenging in these two domains.
Conclusion
We made the first effort to summarize student feedback using an Integer Linear Programming framework with a low-rank matrix approximation, and applied it to different types of data sets including news articles, product reviews, and peer reviews. Our approach allows sentences to share co-occurrence statistics and alleviates the sparsity issue. Our experiments showed that the proposed approach outperforms a range of baselines on the student response data sets Eng and CS2016 in terms of ROUGE scores, but not on the other courses.
ROUGE is often adopted in research papers to evaluate the quality of summarization because it is fast and correlates well with human evaluation BIBREF72 , BIBREF82 . However, ROUGE has also been criticized for not thoroughly capturing the semantic similarity between system and reference summaries. Different alternatives have been proposed to enhance ROUGE. For example, Graham rankel2016statistical proposed to use content-oriented features in conjunction with linguistic features. Similarly, Cohan and Goharian COHAN16.1144 proposed to use content relevance. At the same time, many researchers supplement ROUGE with a manual evaluation. This is why we conduct evaluations using both ROUGE and human evaluation in this work.
However, we found that a system with better ROUGE scores is not necessarily preferred more by humans (§ SECREF28 ). For example, ILP is preferred more on all three review data sets even though it obtained lower ROUGE scores than the other systems. This coincides with the fact that ILP generated summaries with fewer sentences than the other two systems (Table TABREF39 ).
We also investigated a variety of attributes that might impact the performance on a range of data sets. Unfortunately, we did not arrive at a conclusive answer as to when our method will work better.
In the future, we would like to conduct a large-scale intrinsic evaluation to examine whether the low-rank matrix approximation captures similar bigrams or not, and to investigate more attributes, such as new metrics for diversity. We would also like to explore the opportunities offered by combining a vector sentence representation learned by a neural network with the ILP framework. | No
f8d32088d17b32b0c877d59965b35c4f51f0ceea | f8d32088d17b32b0c877d59965b35c4f51f0ceea_0 | Q: Do the authors report on English datasets only?
Text: Introduction
Street gangs are defined as “a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise” BIBREF0 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Today, over 1.4 million people, belonging to more than 33,000 gangs, are active in the United States BIBREF2 , of which 88% identify themselves as being members of a street gang. They are also active users of social media BIBREF2 ; according to 2007 National Assessment Center's survey of gang members, 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF3 . More recent studies report approximately 45% of gang members participate in online offending activities such as threatening, harassing individuals, posting violent videos or attacking someone on the street for something they said online BIBREF4 , BIBREF5 . They confirm that gang members use social media to express themselves in ways similar to their offline behavior on the streets BIBREF6 .
Despite its public nature, gang members post on social media without fear of consequences because there are only a few tools law enforcement can presently use to surveil social media BIBREF7 . For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF8 . From offline clues, the officers monitor just a selected set of social media accounts which are manually discovered and related to a specific investigation. Thus, developing tools to identify gang member profiles on social media is an important step in the direction of using machine intelligence to fight crime.
To help agencies monitor gang activity on social media, our past work investigated how features from Twitter profiles, including profile text, profile images, tweet text, emoji use, and their links to YouTube, may be used to reliably find gang member profiles BIBREF9 . The diverse set of features, chosen to combat the fact that gang members often use local terms and hashtags in their posts, offered encouraging results. In this paper, we report our experience in integrating deep learning into our gang member profile classifier. Specifically, we investigate the effect of translating the features into a vector space using word embeddings BIBREF10 . This idea is motivated by the recent success of word embeddings-based methods to learn syntactic and semantic structures automatically when provided with large datasets. A dataset of over 3,000 gang and non-gang member profiles that we previously curated is used to train the word embeddings. We show that pre-trained word embeddings improve the machine learning models and help us obtain an INLINEFORM0 -score of INLINEFORM1 on gang member profiles (a 6.39% improvement in INLINEFORM2 -score compared to the baseline models which were not trained using word embeddings).
This paper is organized as follows. Section SECREF2 discusses the related literature and frames how this work differs from other related works. Section SECREF3 discusses our approach based on word embeddings to identify gang member profiles. Section SECREF4 reports on the evaluation of the proposed approach and the evaluation results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Researchers have begun investigating the gang members' use of social media and have noticed the importance of identifying gang members' Twitter profiles a priori BIBREF6 , BIBREF7 . Before analyzing any textual context retrieved from their social media posts, knowing that a post has originated from a gang member could help systems to better understand the message conveyed by that post. Wijeratne et al. developed a framework to analyze what gang members post on social media BIBREF7 . Their framework could only extract social media posts from self identified gang members by searching for pre-identified gang names in a user's Twitter profile description. Patton et al. developed a method to collect tweets from a group of gang members operating in Detroit, MI BIBREF11 . However, their approach required the gang members' Twitter profile names to be known beforehand, and data collection was localized to a single city in the country. These studies investigated a small set of manually curated gang member profiles, often from a small geographic area that may bias their findings.
In our previous work BIBREF9 , we curated what may be the largest set of gang member profiles to study how gang member Twitter profiles can be automatically identified based on the content they share online. A data collection process involving location neutral keywords used by gang members, with an expanded search of their retweet, friends and follower networks, led to identifying 400 authentic gang member profiles on Twitter. Our study discovered that the text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles. While a very promising INLINEFORM0 measure with low false positive rate was achieved, we hypothesize that the diverse kinds and the multitude of features employed (e.g. unigrams of tweet text) could be amenable to an improved representation for classification. We thus explore the possibility of mapping these features into a considerably smaller feature space through the use of word embeddings.
Previous research has shown word embeddings-based methods can significantly improve short text classification BIBREF12 , BIBREF13 . For example, Lilleberget et al. showed that word embeddings weighted by INLINEFORM0 - INLINEFORM1 outperform other variants of word embedding models discussed in BIBREF13 , after training word embedding models on over 18,000 newsgroup posts. Wang et al. showed that short text categorization can be improved by word embeddings with the help of a neural network model that feeds semantic cliques learned over word embeddings into a convolutional neural network BIBREF12 . We believe our corpus of gang and non-gang member tweets, with nearly 64.6 million word tokens, could act as a rich resource to train word embeddings for distinguishing gang and non-gang member Twitter users. Our investigation differs from other word embeddings-based text classification systems such as BIBREF12 , BIBREF13 due to the fact that we use multiple feature types including emojis in tweets and image tags extracted from Twitter profile and cover images in our classification task.
Word Embeddings
A word embedding model is a neural network that learns rich representations of words in a text corpus. It takes data from a large, INLINEFORM0 -dimensional `word space' (where INLINEFORM1 is the number of unique words in a corpus) and learns a transformation of the data into a lower INLINEFORM2 -dimensional space of real numbers. This transformation is developed in a way that similarities between the INLINEFORM3 -dimensional vector representation of two words reflects semantic relationships among the words themselves. These semantics are not captured by typical bag-of-words or INLINEFORM4 -gram models for classification tasks on text data BIBREF14 , BIBREF10 .
Word embeddings have led to state-of-the-art results in many sequential learning tasks BIBREF15 . In fact, word embedding learning is an important step for many statistical language modeling tasks in text processing systems. Bengio et al. were the first to introduce the idea of learning a distributed representation for words over a text corpus BIBREF16 . They learned representations for each word in the text corpus using a neural network model that modeled the joint probability function of word sequences in terms of the feature vectors of the words in the sequence. Mikolov et al. showed that simple algebraic operations can be performed on word embeddings learned over a text corpus, which leads to findings such as: the word embedding vector of the word “King” INLINEFORM0 the word embedding vectors of “Man” INLINEFORM1 “Woman” results in a word embedding vector that is closest to the word embedding vector of the word “Queen” BIBREF14 . Recent successes in using word embeddings to improve text classification for short text BIBREF12 , BIBREF13 encouraged us to explore how they can be used to improve gang and non-gang member Twitter profile classification.
Word embeddings can be learned under different neural network architectures; two popular ones are the Continuous Bag-of-Words (CBOW) and Continuous Skip-gram (Skip-gram) models BIBREF17 . The CBOW model learns a neural network such that, given a set of context words surrounding a target word, it predicts the target word. The Skip-gram model differs by predicting context words given a target word and by capturing the ordering of word occurrences. Recent improvements to the Skip-gram model make it better able to handle less frequent words, especially when negative sampling is used BIBREF10 .
Features considered
Gang member tweets and profile descriptions tend to have few textual indicators that demonstrate their gang affiliations or their tweets/profile text may carry acronyms which can only be deciphered by others involved in gang culture BIBREF9 . These gang-related terms are often local to gangs operating in neighborhoods and change rapidly when they form new gangs. Consequently, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, we use heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music preferences and affinity. In this section, we briefly discuss the feature types and their broad differences in gang and non-gang member profiles. An in-depth explanation of these feature selection can be found in BIBREF9 .
Tweet text: In our previous work, we observed that gang members use curse words nearly five times more than the average curse words use on Twitter BIBREF9 . Further, we noticed that gang members mainly use Twitter to discuss drugs and money using terms such as smoke, high, hit, money, got, and need while non-gang members mainly discuss their feelings using terms such as new, like, love, know, want, and look.
Twitter profile description: We found gang member profile descriptions to be rife with curse words (nigga, fuck, and shit) while non-gang members use words related to their feelings or interests (love, life, music, and book). We noticed that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members as about INLINEFORM0 of gang member Twitter profiles used terms such as rip and free.
Emoji features: We found that the fuel pump emoji was the most frequently used emoji by gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji was the second most frequently used emoji, which is often used with the police cop emoji in an `emoji chain' to express their hatred towards law enforcement officers. The money bag emoji, money with wings emoji, unlock emoji, and a variety of the angry face emoji such as the devil face emoji and imp emoji were also common in gang members' but not in non-gang members' tweets.
Twitter profile and cover images: We noticed that gang members often pose holding or pointing weapons, seen in a group fashion which displays a gangster culture, show off graffiti, hand signs, tattoos, and bulk cash in their profile and cover images. We used Clarifai web service to tag the profile and cover images of the Twitter users in our dataset and used the image tags returned by Clarifai API to train word embeddings. Tags such as trigger, bullet, and worship were unique for gang member profiles while non-gang member images had unique tags such as beach, seashore, dawn, wildlife, sand, and pet.
YouTube videos: We found that 51.25% of the gang members in our dataset have a tweet that links to a YouTube video. Further, we found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre BIBREF9 . Moreover, we found that eight YouTube links are shared on average by a gang member. The top 5 terms used in YouTube videos shared by gang members were shit, like, nigga, fuck, and lil while like, love, peopl, song, and get were the top 5 terms in non-gang members' video data.
Classification approach
Figure FIGREF11 gives an overview of the steps to learn word embeddings and to integrate them into a classifier. We first convert any non-textual features such as emoji and profile images into textual features. We use Emoji for Python and Clarifai services, respectively, to convert emoji and images into text. Prior to training the word embeddings, we remove all the seed words used to find gang member profiles and stopwords, and perform stemming across all tweets and profile descriptions. We then feed all the training data (word INLINEFORM0 in Figure FIGREF11 ) we collected from our Twitter dataset to Word2Vec tool and train it using a Skip-gram model with negative sampling. When training the Skip-gram model, we set the negative sampling rate to 10 sample words, which seems to work well with medium-sized datasets BIBREF10 . We set the context word window to be 5, so that it will consider 5 words to left and right of the target word (words INLINEFORM1 to INLINEFORM2 in Figure FIGREF11 ). This setting is suitable for sentences where average sentence length is less than 11 words, as is the case in tweets BIBREF18 . We ignore the words that occur less than 5 times in our training corpus.
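The configuration above maps directly onto gensim's Word2Vec interface, as sketched below. Recent gensim releases name the dimensionality parameter vector_size (older releases, closer to the version used here, call it size), and the corpus argument is a placeholder for the preprocessed tweets, profile descriptions, emoji names, image tags, and YouTube text.

```python
from gensim.models import Word2Vec

def train_embeddings(corpus):
    """Train Skip-gram embeddings with the settings described above.

    corpus: an iterable of token lists built from the preprocessed text data.
    """
    return Word2Vec(
        sentences=corpus,
        vector_size=300,  # called `size` in older gensim releases
        window=5,         # 5 context words to the left and right of the target
        min_count=5,      # ignore words occurring fewer than 5 times
        sg=1,             # Skip-gram architecture
        negative=10,      # negative sampling with 10 noise words
    )

# Example of the kind of query discussed next:
#   train_embeddings(corpus).wv.most_similar("bdk", topn=10)
```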
We investigated how well the local language has been captured by the word embedding models we trained. We used the `most similar' functionality offered by the Word2Vec tool to understand what the model has learned about a few gang-related slang terms which are specific to the Chicago area. For example, we analyzed the ten most similar words learned by the word embedding model for the term BDK (Black Disciples Killers). We noticed that out of the 10 most similar words, five were names of local Chicago gangs, which are rivals of the Black Disciples Gang, two were different syntactic variations of BDK (bdkk, bdkkk) and the other three were different syntactic variations of GDK (gdk, gdkk, gdkkk). GDK is local gang slang for `Gangster Disciples Killer', which is used by rivals of the Gangster Disciples gang to show their hatred towards them. We found similar results for the term GDK. Out of the ten most similar words, six were showing hatred towards six different Gangster Disciples gangs that operate in the Chicago area. We believe that those who used the term GDK to show their hatred towards Gangster Disciples gangs might also have rivalries with the six gangs we found.
We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile including the words appear in tweets, profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile. To combine the word vectors, we consider five different methods. Letting the size of a word vector be INLINEFORM0 , for a Twitter profile INLINEFORM1 with INLINEFORM2 unique words and the vector of the INLINEFORM3 word in INLINEFORM4 denoted by INLINEFORM5 , we compute the feature vector for the Twitter profile INLINEFORM6 by:
Sum of word embeddings INLINEFORM0 . This is the sum the word embedding vectors obtained for all words in a Twitter profile: INLINEFORM1
Mean of word embeddings INLINEFORM0 . This is the mean of the word embedding vectors of all words found in a Twitter profile: INLINEFORM1
Sum of word embeddings weighted by term frequency INLINEFORM0 . This is each word embedding vector multiplied by the word's frequency for the Twitter profile: INLINEFORM1
where INLINEFORM0 is the term frequency for the INLINEFORM1 word in profile INLINEFORM2 .
Sum of word embeddings weighted by INLINEFORM0 - INLINEFORM1 INLINEFORM2 . This is each word vector multiplied by the word's INLINEFORM3 - INLINEFORM4 for the Twitter profile: INLINEFORM5
where INLINEFORM0 is the INLINEFORM1 - INLINEFORM2 value for the INLINEFORM3 word in profile INLINEFORM4 .
Mean of word embeddings weighted by term frequency INLINEFORM0 . This is the mean of the word embedding vectors weighted by term frequency: INLINEFORM1
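A NumPy sketch of the five combination schemes is given below. It assumes the per-profile term frequencies and tf-idf values have already been computed, skips out-of-vocabulary words, and reads the tf-weighted mean as the plain average of the count-scaled vectors; these details are our interpretation of the descriptions above.

```python
import numpy as np

def profile_vectors(words, model, tf, tfidf):
    """Compute the five profile representations described above (sketch).

    words: unique words observed for one Twitter profile
    model: trained gensim Word2Vec model (out-of-vocabulary words are skipped)
    tf, tfidf: dicts mapping each word to its term frequency / tf-idf value
    """
    kept = [w for w in words if w in model.wv]
    V = np.array([model.wv[w] for w in kept])          # (n_words, 300)
    counts = np.array([tf[w] for w in kept])[:, None]
    weights = np.array([tfidf[w] for w in kept])[:, None]
    return {
        "sum": V.sum(axis=0),
        "mean": V.mean(axis=0),
        "sum_tf": (V * counts).sum(axis=0),
        "sum_tfidf": (V * weights).sum(axis=0),
        "mean_tf": (V * counts).mean(axis=0),
    }
```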
Evaluation
We evaluate the performance of using word embeddings to discover gang member profiles on Twitter. We first discuss the dataset, learning algorithms and baseline comparison models used in the experiments. Then we discuss the 10-fold cross validation experiments and the evaluation matrices used. Finally we present the results of the experiments.
Evaluation setup
We consider a dataset of curated gang and non-gang members' Twitter profiles collected from our previous work BIBREF9 . It was developed by querying the Followerwonk Web service API with location-neutral seed words known to be used by gang members across the U.S. in their Twitter profiles. The dataset was further expanded by examining the friends, follower, and retweet networks of the gang member profiles found by searching for seed words. Specific details about our data curation procedure are discussed in BIBREF9 . Ultimately, this dataset consists of 400 gang member profiles and 2,865 non-gang member profiles. For each user profile, we collected up to the most recent 3,200 tweets from their Twitter timelines, profile description text, profile and cover images, and the comments and video descriptions for every YouTube video shared by them. Table 1 provides statistics about the number of words found in each type of feature in the dataset. It includes a total of 821,412 tweets from gang members and 7,238,758 tweets from non-gang members.
To build the classifiers we used three different learning algorithms, namely Logistic Regression (LR), Random Forest (RF), and Support Vector Machines (SVM). We used version 0.17.1 of the scikit-learn machine learning library for Python to implement the classifiers, and the open-source Python library Gensim BIBREF19 to generate the word embeddings. We compare our results with the two best performing systems reported in BIBREF9, which are the state-of-the-art models for identifying gang members on Twitter. Both baseline models are built from a random forest classifier trained over term frequencies of unigrams in tweet text, emoji, profile data, YouTube video data, and image tags. Baseline Model(1) considers all 3,285 gang and non-gang member profiles in our dataset. Baseline Model(2) considers only those Twitter profiles that contain every feature type discussed in Section SECREF2. Because a Twitter profile may not have every feature type, baseline Model(1) represents the practical scenario in which not every Twitter profile contains every type of feature. We compare our results against both baseline models and report the improvements.
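As a rough sketch, the three classifier families can be instantiated with scikit-learn as follows; the hyperparameters shown are illustrative defaults rather than the exact settings used in our experiments.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

classifiers = {
    'LR': LogisticRegression(),                      # logistic regression
    'RF': RandomForestClassifier(n_estimators=100),  # random forest
    'SVM': SVC(kernel='linear'),                     # support vector machine
}
```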
10-fold cross validation
We conducted 10-fold cross validation experiments to evaluate the performance of our models. We used all Twitter profiles in the dataset to conduct experiments on the five methods we used to combine word embedding vectors. For each of the five vector combination methods (as mentioned in Section SECREF9), we trained classifiers using each learning algorithm we considered. In each fold, the training set was used to generate the word vectors, which were then used to compute features for both the training set and the test set. For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' (positive) and `non-gang' (negative) classes, namely Precision $= TP/(TP+FP)$, Recall $= TP/(TP+FN)$, and $F_1$-score $= 2 \cdot Precision \cdot Recall / (Precision + Recall)$, where $TP$ is the number of true positives, $FP$ is the number of false positives, $TN$ is the number of true negatives, and $FN$ is the number of false negatives. We report these metrics for the `gang' and `non-gang' classes separately because of the class imbalance in the dataset.
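The protocol can be sketched as follows, using the modern scikit-learn API. It assumes `profiles` is a list of per-profile token lists, `y` is a NumPy array of labels (1 = gang, 0 = non-gang), and `profile_vector` is the hypothetical combination function sketched earlier; the shuffling, random seed, and choice of logistic regression here are illustrative rather than the exact experimental settings.

```python
import numpy as np
from collections import Counter
from gensim.models import Word2Vec
from sklearn.model_selection import KFold
from sklearn.metrics import precision_recall_fscore_support
from sklearn.linear_model import LogisticRegression

kf = KFold(n_splits=10, shuffle=True, random_state=1)
for train_idx, test_idx in kf.split(profiles):
    # Word vectors are learned from the training fold only, then used to build
    # feature vectors for both the training and the test fold.
    model = Word2Vec([profiles[i] for i in train_idx],
                     size=300, sg=1, negative=10, window=5, min_count=5)

    def features(indices):
        # profile_vector(...) is the hypothetical combination function sketched
        # earlier; an empty tf-idf lookup and the 'mean' method are used for brevity.
        return np.array([profile_vector(Counter(profiles[i]), model, {}, 'mean')
                         for i in indices])

    X_train, X_test = features(train_idx), features(test_idx)
    clf = LogisticRegression().fit(X_train, y[train_idx])
    p, r, f, _ = precision_recall_fscore_support(y[test_idx], clf.predict(X_test),
                                                 labels=[1, 0])
    print('gang:     P=%.4f R=%.4f F1=%.4f' % (p[0], r[0], f[0]))
    print('non-gang: P=%.4f R=%.4f F1=%.4f' % (p[1], r[1], f[1]))
```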
Experimental results
Table TABREF22 presents the 10-fold cross validation results for the baseline models (first and second rows) and our word embeddings-based models (third through seventh rows). As mentioned earlier, both baseline models use a random forest classifier trained on term frequencies of unigram features extracted from all feature types. The two baseline models differ only in the training data filtering method used, which is based on the availability of features in the training dataset as described in BIBREF9. Baseline Model(1) uses all profiles in the dataset and has an $F_1$-score of 0.7364 for the `gang' class and 0.9690 for the `non-gang' class. Baseline Model(2), which only uses profiles that contain each and every feature type, has an $F_1$-score of 0.7755 for the `gang' class and 0.9720 for the `non-gang' class.
Vector sum is one of the basic operations we can perform on word embedding vectors. The random forest classifier performs best among the vector sum-based classifiers ($V_p^{sum}$), and the logistic regression and SVM classifiers also perform comparatively well. Using the vector mean ($V_p^{mean}$) improves all classifier results, and the SVM classifier trained on the mean of the word embeddings achieves results very close to baseline Model(2). Multiplying each word vector by its word count before taking the vector sum ($V_p^{tf}$) degrades the classifiers' accuracy for correctly identifying the positive class. When we multiply words by their corresponding tf-idf values before taking the vector sum ($V_p^{tfidf}$), we again observe an increase in the classifiers' accuracy. We achieve the best performance by averaging the vector sum weighted by term frequency ($V_p^{tfmean}$). Here we weight the mean of the word embeddings by the count of each word, which beats all other word embeddings-based models and the two baselines. In this setting, the logistic regression classifier trained on word embeddings performs best, with an $F_1$-score of 0.7835. This is a 6.39% improvement in performance when compared to baseline Model(1) and a 1.03% improvement when compared to baseline Model(2). Overall, out of the five vector operations that we used to train machine learning classifiers, four gave us classifier models that beat baseline Model(1), and two gave us classifier models that either achieved results very similar to baseline Model(2) or beat it. This evaluation demonstrates the promise of using pre-trained word embeddings to boost the accuracy of supervised learning algorithms for Twitter gang member profile classification.
Conclusion and Future Work
This paper presented a word embeddings-based approach to address the problem of automatically identifying gang member profiles on Twitter. Using a Twitter user dataset that consists of 400 gang member and 2,865 non-gang member profiles, we trained word embedding models based on users' tweets, profile descriptions, emoji, images, and videos shared on Twitter (textual features extracted from images and videos). We then used the pre-trained word embedding models to train supervised machine learning classifiers, which showed superior performance when compared to the state-of-the-art baseline models reported in the literature. We plan to further extend our work by building our own image classification system specifically designed to identify images commonly shared by gang members, such as guns, gang hand signs, stacks of cash, and drugs. We would also like to experiment with automatically building dictionaries that contain gang names and gang-related slang using crowd-sourced gang-related knowledge bases such as HipWiki. We also want to experiment with using such knowledge bases to train word embeddings, to understand whether having access to gang-related knowledge could boost the performance of our models. Finally, we would like to study how we can further use the social networks of known gang members to identify new gang member profiles on Twitter. | Yes
4f0f446bf4518af7f539f6283145135192d5c00b | 4f0f446bf4518af7f539f6283145135192d5c00b_0 | Q: Which supervised learning algorithms are used in the experiments? | Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM)
663b36f99ad2422f4d3a8c6398ebf55ceab7770d | 663b36f99ad2422f4d3a8c6398ebf55ceab7770d_0 | Q: How is YouTube content translated into a vector format? | words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline
be595b2017545b0359db6abf4914a155bdd10d23 | be595b2017545b0359db6abf4914a155bdd10d23_0 | Q: How is the ground truth of gang membership established in this dataset?
Text: Introduction
Street gangs are defined as “a coalition of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively to conduct illegal activity and to control a territory, facility, or enterprise” BIBREF0 . They promote criminal activities such as drug trafficking, assault, robbery, and threatening or intimidating a neighborhood BIBREF1 . Today, over 1.4 million people, belonging to more than 33,000 gangs, are active in the United States BIBREF2 , of which 88% identify themselves as being members of a street gang. They are also active users of social media BIBREF2 ; according to 2007 National Assessment Center's survey of gang members, 25% of individuals in gangs use the Internet for at least 4 hours a week BIBREF3 . More recent studies report approximately 45% of gang members participate in online offending activities such as threatening, harassing individuals, posting violent videos or attacking someone on the street for something they said online BIBREF4 , BIBREF5 . They confirm that gang members use social media to express themselves in ways similar to their offline behavior on the streets BIBREF6 .
Despite its public nature, gang members post on social media without fear of consequences because there are only a few tools law enforcement can presently use to surveil social media BIBREF7 . For example, the New York City police department employs over 300 detectives to combat teen violence triggered by insults, dares, and threats exchanged on social media, and the Toronto police department teaches officers about the use of social media in investigations BIBREF8 . From offline clues, the officers monitor just a selected set of social media accounts which are manually discovered and related to a specific investigation. Thus, developing tools to identify gang member profiles on social media is an important step in the direction of using machine intelligence to fight crime.
To help agencies monitor gang activity on social media, our past work investigated how features from Twitter profiles, including profile text, profile images, tweet text, emoji use, and their links to YouTube, may be used to reliably find gang member profiles BIBREF9 . The diverse set of features, chosen to combat the fact that gang members often use local terms and hashtags in their posts, offered encouraging results. In this paper, we report our experience in integrating deep learning into our gang member profile classifier. Specifically, we investigate the effect of translating the features into a vector space using word embeddings BIBREF10 . This idea is motivated by the recent success of word embeddings-based methods to learn syntactic and semantic structures automatically when provided with large datasets. A dataset of over 3,000 gang and non-gang member profiles that we previously curated is used to train the word embeddings. We show that pre-trained word embeddings improve the machine learning models and help us obtain an INLINEFORM0 -score of INLINEFORM1 on gang member profiles (a 6.39% improvement in INLINEFORM2 -score compared to the baseline models which were not trained using word embeddings).
This paper is organized as follows. Section SECREF2 discusses the related literature and frames how this work differs from other related works. Section SECREF3 discusses our approach based on word embeddings to identify gang member profiles. Section SECREF4 reports on the evaluation of the proposed approach and the evaluation results in detail. Section SECREF5 concludes the work reported while discussing the future work planned.
Related Work
Researchers have begun investigating the gang members' use of social media and have noticed the importance of identifying gang members' Twitter profiles a priori BIBREF6 , BIBREF7 . Before analyzing any textual context retrieved from their social media posts, knowing that a post has originated from a gang member could help systems to better understand the message conveyed by that post. Wijeratne et al. developed a framework to analyze what gang members post on social media BIBREF7 . Their framework could only extract social media posts from self identified gang members by searching for pre-identified gang names in a user's Twitter profile description. Patton et al. developed a method to collect tweets from a group of gang members operating in Detroit, MI BIBREF11 . However, their approach required the gang members' Twitter profile names to be known beforehand, and data collection was localized to a single city in the country. These studies investigated a small set of manually curated gang member profiles, often from a small geographic area that may bias their findings.
In our previous work BIBREF9 , we curated what may be the largest set of gang member profiles to study how gang member Twitter profiles can be automatically identified based on the content they share online. A data collection process involving location neutral keywords used by gang members, with an expanded search of their retweet, friends and follower networks, led to identifying 400 authentic gang member profiles on Twitter. Our study discovered that the text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles. While a very promising INLINEFORM0 measure with low false positive rate was achieved, we hypothesize that the diverse kinds and the multitude of features employed (e.g. unigrams of tweet text) could be amenable to an improved representation for classification. We thus explore the possibility of mapping these features into a considerably smaller feature space through the use of word embeddings.
Previous research has shown word embeddings-based methods can significantly improve short text classification BIBREF12 , BIBREF13 . For example, Lilleberget et al. showed that word embeddings weighted by INLINEFORM0 - INLINEFORM1 outperform other variants of word embedding models discussed in BIBREF13 , after training word embedding models on over 18,000 newsgroup posts. Wang et al. showed that short text categorization can be improved by word embeddings with the help of a neural network model that feeds semantic cliques learned over word embeddings into a convolutional neural network BIBREF12 . We believe our corpus of gang and non-gang member tweets, with nearly 64.6 million word tokens, could act as a rich resource to train word embeddings for distinguishing gang and non-gang member Twitter users. Our investigation differs from other word embeddings-based text classification systems such as BIBREF12 , BIBREF13 due to the fact that we use multiple feature types including emojis in tweets and image tags extracted from Twitter profile and cover images in our classification task.
Word Embeddings
A word embedding model is a neural network that learns rich representations of words in a text corpus. It takes data from a large, INLINEFORM0 -dimensional `word space' (where INLINEFORM1 is the number of unique words in a corpus) and learns a transformation of the data into a lower INLINEFORM2 -dimensional space of real numbers. This transformation is developed in a way that similarities between the INLINEFORM3 -dimensional vector representation of two words reflects semantic relationships among the words themselves. These semantics are not captured by typical bag-of-words or INLINEFORM4 -gram models for classification tasks on text data BIBREF14 , BIBREF10 .
Word embeddings have led to state-of-the-art results in many sequential learning tasks BIBREF15 . In fact, word embedding learning is an important step for many statistical language modeling tasks in text processing systems. Bengio et al. were the first ones to introduce the idea of learning a distributed representation for words over a text corpus BIBREF16 . They learned representations for each word in the text corpus using a neural network model that modeled the joint probability function of word sequences in terms of the feature vectors of the words in the sequence. Mikolov et al. showed that simple algebraic operations can be performed on word embeddings learned over a text corpus, which leads to findings such as the word embedding vector of the word “King” INLINEFORM0 the word embedding vectors of “Man” INLINEFORM1 “Woman” would result in a word embedding vector that is closest to the word embedding vector of the word “Queen” BIBREF14 . Recent successes in using word embeddings to improve text classification for short text BIBREF12 , BIBREF13 encouraged us to explore how they can be used to improve gang and non-gang member Twitter profile classification.
Word embeddings can be learned under different neural network architectures; two popular ones are the Continuous Bag-of-Words (CBOW) and Continuous Skip-gram (Skip-gram) models BIBREF17 . The CBOW model learns a neural network such that, given a set of context words surrounding a target word, it predicts the target word. The Skip-gram model differs by predicting context words given a target word and by capturing the ordering of word occurrences. Recent improvements to the Skip-gram model make it better able to handle less frequent words, especially when negative sampling is used BIBREF10 .
Features considered
Gang member tweets and profile descriptions tend to have few textual indicators that demonstrate their gang affiliations, or their tweets/profile text may carry acronyms which can only be deciphered by others involved in gang culture BIBREF9 . These gang-related terms are often local to gangs operating in neighborhoods and change rapidly when they form new gangs. Consequently, building a database of keywords, phrases, and other identifiers to find gang members nationally is not feasible. Instead, we use heterogeneous sets of features derived not only from profile and tweet text but also from the emoji usage, profile images, and links to YouTube videos reflecting their music preferences and affinity. In this section, we briefly discuss the feature types and their broad differences in gang and non-gang member profiles. An in-depth explanation of this feature selection can be found in BIBREF9 .
Tweet text: In our previous work, we observed that gang members use curse words nearly five times more than the average curse word usage on Twitter BIBREF9 . Further, we noticed that gang members mainly use Twitter to discuss drugs and money using terms such as smoke, high, hit, money, got, and need, while non-gang members mainly discuss their feelings using terms such as new, like, love, know, want, and look.
Twitter profile description: We found gang member profile descriptions to be rife with curse words (nigga, fuck, and shit) while non-gang members use words related to their feelings or interests (love, life, music, and book). We noticed that gang members use their profile descriptions as a space to grieve for their fallen or incarcerated gang members as about INLINEFORM0 of gang member Twitter profiles used terms such as rip and free.
Emoji features: We found that the fuel pump emoji was the most frequently used emoji by gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji was the second most frequently used emoji, which is often used with the police cop emoji in an `emoji chain' to express their hatred towards law enforcement officers. The money bag emoji, money with wings emoji, unlock emoji, and a variety of the angry face emoji such as the devil face emoji and imp emoji were also common in gang members' but not in non-gang members' tweets.
Twitter profile and cover images: We noticed that gang members often pose holding or pointing weapons, seen in a group fashion which displays a gangster culture, show off graffiti, hand signs, tattoos, and bulk cash in their profile and cover images. We used Clarifai web service to tag the profile and cover images of the Twitter users in our dataset and used the image tags returned by Clarifai API to train word embeddings. Tags such as trigger, bullet, and worship were unique for gang member profiles while non-gang member images had unique tags such as beach, seashore, dawn, wildlife, sand, and pet.
YouTube videos: We found that 51.25% of the gang members in our dataset have a tweet that links to a YouTube video. Further, we found that 76.58% of the shared links are related to hip-hop music, gangster rap, and the culture that surrounds this music genre BIBREF9 . Moreover, we found that eight YouTube links are shared on average by a gang member. The top 5 terms used in YouTube videos shared by gang members were shit, like, nigga, fuck, and lil while like, love, peopl, song, and get were the top 5 terms in non-gang members' video data.
Classification approach
Figure FIGREF11 gives an overview of the steps to learn word embeddings and to integrate them into a classifier. We first convert any non-textual features such as emoji and profile images into textual features. We use Emoji for Python and Clarifai services, respectively, to convert emoji and images into text. Prior to training the word embeddings, we remove all the seed words used to find gang member profiles and stopwords, and perform stemming across all tweets and profile descriptions. We then feed all the training data (word INLINEFORM0 in Figure FIGREF11 ) we collected from our Twitter dataset to Word2Vec tool and train it using a Skip-gram model with negative sampling. When training the Skip-gram model, we set the negative sampling rate to 10 sample words, which seems to work well with medium-sized datasets BIBREF10 . We set the context word window to be 5, so that it will consider 5 words to left and right of the target word (words INLINEFORM1 to INLINEFORM2 in Figure FIGREF11 ). This setting is suitable for sentences where average sentence length is less than 11 words, as is the case in tweets BIBREF18 . We ignore the words that occur less than 5 times in our training corpus.
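As a concrete illustration of this configuration, the Skip-gram training step can be sketched with Gensim as below. This is a minimal sketch rather than the exact training script: the inline placeholder corpus, the variable names, and the dimensionality argument (named size in older Gensim releases and vector_size in 4.x) are assumptions.

```python
from gensim.models import Word2Vec

# Placeholder corpus: in the actual system each "sentence" is a list of preprocessed tokens
# drawn from tweets, profile descriptions, textual emoji names, image tags and YouTube text.
corpus = [["free", "rip", "smoke", "money", "love", "music"]] * 10

model = Word2Vec(
    corpus,
    sg=1,             # Skip-gram architecture
    negative=10,      # negative sampling rate of 10 noise words
    window=5,         # 5 context words to the left and right of the target word
    min_count=5,      # ignore words occurring fewer than 5 times
    vector_size=300,  # 300-dimensional word vectors ("size" in older Gensim releases)
)

# Qualitative inspection of what the model has learned for a given term.
print(model.wv.most_similar("free", topn=10))
```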
We investigated how well the local language has been captured by the word embedding models we trained. We used the `most similar' functionality offered by the Word2Vec tool to understand what the model has learned about a few gang-related slang terms which are specific to the Chicago area. For example, we analyzed the ten most similar words learned by the word embedding model for the term BDK (Black Disciples Killers). We noticed that out of the 10 most similar words, five were names of local Chicago gangs, which are rivals of the Black Disciples Gang, two were different syntactic variations of BDK (bdkk, bdkkk) and the other three were different syntactic variations of GDK (gdk, gdkk, gdkkk). GDK is local gang slang for `Gangster Disciples Killer', which is used by rivals of the Gangster Disciples gang to show their hatred towards them. We found similar results for the term GDK. Out of the ten most similar words, six were showing hatred towards six different Gangster Disciples gangs that operate in the Chicago area. We believe that those who used the term GDK to show their hatred towards Gangster Disciples gangs might also have rivalries with the six gangs we found.
We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile including the words appear in tweets, profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile. To combine the word vectors, we consider five different methods. Letting the size of a word vector be INLINEFORM0 , for a Twitter profile INLINEFORM1 with INLINEFORM2 unique words and the vector of the INLINEFORM3 word in INLINEFORM4 denoted by INLINEFORM5 , we compute the feature vector for the Twitter profile INLINEFORM6 by:
Sum of word embeddings INLINEFORM0 . This is the sum of the word embedding vectors obtained for all words in a Twitter profile: INLINEFORM1
Mean of word embeddings INLINEFORM0 . This is the mean of the word embedding vectors of all words found in a Twitter profile: INLINEFORM1
Sum of word embeddings weighted by term frequency INLINEFORM0 . This is each word embedding vector multiplied by the word's frequency for the Twitter profile: INLINEFORM1
where INLINEFORM0 is the term frequency for the INLINEFORM1 word in profile INLINEFORM2 .
Sum of word embeddings weighted by INLINEFORM0 - INLINEFORM1 INLINEFORM2 . This is each word vector multiplied by the word's INLINEFORM3 - INLINEFORM4 for the Twitter profile: INLINEFORM5
where INLINEFORM0 is the INLINEFORM1 - INLINEFORM2 value for the INLINEFORM3 word in profile INLINEFORM4 .
Mean of word embeddings weighted by term frequency INLINEFORM0 . This is the mean of the word embedding vectors weighted by term frequency: INLINEFORM1
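For reference, the five combination operators above can be written directly in NumPy. This is a hedged sketch: vectors, counts and tfidf are illustrative variable names for the per-profile word vectors and their weights, and since the exact normalization of the fifth operator is not spelled out here, the sketch divides the frequency-weighted sum by the number of unique words.

```python
import numpy as np

def combine(vectors, counts, tfidf):
    """vectors: (n_words, dim) embeddings of the unique words in one profile;
    counts / tfidf: (n_words,) term frequencies and tf-idf weights (illustrative names)."""
    v_sum     = vectors.sum(axis=0)                          # sum of word embeddings
    v_mean    = vectors.mean(axis=0)                         # mean of word embeddings
    v_tf      = (vectors * counts[:, None]).sum(axis=0)      # sum weighted by term frequency
    v_tfidf   = (vectors * tfidf[:, None]).sum(axis=0)       # sum weighted by tf-idf
    v_tf_mean = (vectors * counts[:, None]).mean(axis=0)     # mean weighted by term frequency
    return v_sum, v_mean, v_tf, v_tfidf, v_tf_mean
```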
Evaluation
We evaluate the performance of using word embeddings to discover gang member profiles on Twitter. We first discuss the dataset, learning algorithms and baseline comparison models used in the experiments. Then we discuss the 10-fold cross validation experiments and the evaluation matrices used. Finally we present the results of the experiments.
Evaluation setup
We consider a dataset of curated gang and non-gang members' Twitter profiles collected from our previous work BIBREF9 . It was developed by querying the Followerwonk Web service API with location-neutral seed words known to be used by gang members across the U.S. in their Twitter profiles. The dataset was further expanded by examining the friends, follower, and retweet networks of the gang member profiles found by searching for seed words. Specific details about our data curation procedure are discussed in BIBREF9 . Ultimately, this dataset consists of 400 gang member profiles and 2,865 non-gang member profiles. For each user profile, we collected up to the 3,200 most recent tweets from their Twitter timelines, profile description text, profile and cover images, and the comments and video descriptions for every YouTube video shared by them. Table 1 provides statistics about the number of words found in each type of feature in the dataset. It includes a total of 821,412 tweets from gang members and 7,238,758 tweets from non-gang members.
To build the classifiers we used three different learning algorithms, namely Logistic Regression (LR), Random Forest (RF), and Support Vector Machines (SVM). We used version 0.17.1 of the scikit-learn machine learning library for Python to implement the classifiers. The open-source Python tool Gensim BIBREF19 was used to generate the word embeddings. We compare our results with the two best-performing systems reported in BIBREF9 , which are the two state-of-the-art models for identifying gang members on Twitter. Both baseline models are built from a random forest classifier trained over term frequencies for unigrams in tweet text, emoji, profile data, YouTube video data and image tags. Baseline Model(1) considers all 3,285 gang and non-gang member profiles in our dataset. Baseline Model(2) considers all Twitter profiles that contain every feature type discussed in Section SECREF2 . Because a Twitter profile may not have every feature type, baseline Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. However, we compare our results to both baseline models and report the improvements.
10-fold cross validation
We conducted 10-fold cross validation experiments to evaluate the performance of our models. We used all Twitter profiles in the dataset to conduct experiments on the five methods we used to combine word embedding vectors. For each of the five vector combination methods (as mentioned in Section SECREF9 ), we trained classifiers using each learning algorithm we considered. In each fold, the training set was used to generate the word vectors, which were then used to compute features for both the training set and test set. For each 10-fold cross validation experiment, we report three evaluation metrics for the `gang' (positive) and `non-gang' (negative) classes, namely, the Precision = INLINEFORM0 , Recall = INLINEFORM1 , and INLINEFORM2 -score = INLINEFORM3 , where INLINEFORM4 is the number of true positives, INLINEFORM5 is the number of false positives, INLINEFORM6 is the number of true negatives, and INLINEFORM7 is the number of false negatives. We report these metrics for the `gang' and `non-gang' classes separately because of the class imbalance in the dataset.
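A skeleton of this evaluation loop, using scikit-learn's stratified cross-validation and per-class metrics, might look as follows. The feature-building step is a placeholder (build_features is a hypothetical helper that trains the word embeddings on the training fold only), and the classifier shown is just one of the three learners used.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

def cross_validate(profiles, labels, build_features, n_splits=10):
    """profiles, labels: NumPy arrays of profile objects and 'gang'/'non-gang' labels.
    build_features(train, test) is a hypothetical helper returning feature matrices."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_scores = []
    for train_idx, test_idx in skf.split(profiles, labels):
        X_train, X_test = build_features(profiles[train_idx], profiles[test_idx])
        clf = LogisticRegression().fit(X_train, labels[train_idx])
        pred = clf.predict(X_test)
        # precision, recall and F1, reported separately for the two classes
        fold_scores.append(precision_recall_fscore_support(
            labels[test_idx], pred, labels=["gang", "non-gang"]))
    return fold_scores
```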
Experimental results
Table TABREF22 presents 10-fold cross validation results for the baseline models (first and second rows) and our word embeddings-based models (from third row to seventh row). As mentioned earlier, both baseline models use a random forest classifier trained on term frequencies of unigram features extracted from all feature types. The two baseline models differ only in the training data filtering method used, which is based on the availability of features in the training dataset as described in BIBREF9 . The baseline Model(1) uses all profiles in the dataset and has a INLINEFORM0 -score of 0.7364 for the `gang' class and 0.9690 for the `non-gang' class. The baseline Model(2), which only uses profiles that contain each and every feature type, has a INLINEFORM1 -score of 0.7755 for the `gang' class and a INLINEFORM2 -score of 0.9720 for the `non-gang' class.
Vector sum is one of the basic operations we can perform on word embedding vectors. The random forest classifier performs the best among the vector sum-based classifiers, while the logistic regression and SVM classifiers also perform comparatively well ( INLINEFORM0 ). Using the vector mean ( INLINEFORM1 ) improves all classifier results, and the SVM classifier trained on the mean of word embeddings achieves results very close to the baseline Model(2). Multiplying each word embedding by its corresponding word count before taking the vector sum degrades the classifiers' accuracy for correctly identifying the positive class ( INLINEFORM2 ). When we multiply words by their corresponding INLINEFORM3 - INLINEFORM4 values before taking the vector sum, we again observe an increase in the classifiers' accuracy ( INLINEFORM5 ). We achieve the best performance by averaging the vector sum weighted by term frequency ( INLINEFORM6 ). Here we multiply the mean of the word embeddings by the count of each word, which beats all other word embeddings-based models and the two baselines. In this setting, the logistic regression classifier trained on word embeddings performs the best with a INLINEFORM7 -score of 0.7835. This is a 6.39% improvement in performance when compared to the baseline Model(1) and a 1.03% improvement when compared to baseline Model(2). Overall, out of the five vector operations that we used to train machine learning classifiers, four gave us classifier models that beat baseline Model(1), and two vector-based operations gave us classifier models that either achieved very similar results to baseline Model(2) or beat it. This evaluation demonstrates the promise of using pre-trained word embeddings to boost the accuracy of supervised learning algorithms for Twitter gang member profile classification.
Conclusion and Future Work
This paper presented a word embeddings-based approach to address the problem of automatically identifying gang member profiles on Twitter. Using a Twitter user dataset that consists of 400 gang member and 2,865 non-gang member profiles, we trained word embedding models based on users' tweets, profile descriptions, emoji, images, and videos shared on Twitter (textual features extracted from images and videos). We then used the pre-trained word embedding models to train supervised machine learning classifiers, which showed superior performance when compared to the state-of-the-art baseline models reported in the literature. We plan to further extend our work by building our own image classification system specifically designed to identify images commonly shared by gang members such as guns, gang hand signs, stacks of cash and drugs. We would also like to experiment with automatically building dictionaries that contain gang names and gang-related slang using crowd-sourced gang-related knowledge-bases such as HipWiki. We also want to experiment with using such knowledge-bases to train word embeddings to understand whether having access to gang-related knowledge could boost the performance of our models. Finally, we would like to study how we can further use social networks of known gang members to identify new gang member profiles on Twitter. | text in their tweets and profile descriptions, their emoji use, their profile images, and music interests embodied by links to YouTube music videos, can help a classifier distinguish between gang and non-gang member profiles |
79b174d20ea5dd4f35e25c9425fb97f40e27cd6f | 79b174d20ea5dd4f35e25c9425fb97f40e27cd6f_0 | Q: Do they evaluate ablated versions of their CNN+RNN model?
Text: Introduction
While NER tasks across domains share similar problems of ambiguous abbreviations, homonyms, and other entity variations, the domain of biomedical text poses some unique challenges. While, in principle, there is a known set of biomedical entities (e.g., all known proteins), there is a surprising amount of variation for any given entity. For example, PPCA, C4 PEPC, C4 PEPCase, and Photosynthetic PEPCase all refer to the same entity. Additionally, certain entities such as proteins and genes can naturally span less than a “word” (e.g., HA and APG12 are separate proteins in pHA-APG12). Most state-of-the-art NER methods tag entities at the “word” level, and rely on pre- or post-processing rules to extract subword entities. Our goal is to develop a subword approach that does not rely on ad hoc processing steps.
To that end, we introduce a novel subword approach to identifying named entities. Our decision to work with input features and output tags at the byte level instead of the character level is because biomedical datasets typically provide byte offset annotations; however, our methods may also be applied to character-level models. In this paper, we refer to “subword models” as models that take as input a sequence of subwords (e.g., bytes) and output a corresponding sequence of subword tags (e.g., one tag per byte). Our focus is the effects of different subword features on identifying named entities in various biomedical NER datasets, which is especially useful for entities that are arguably more naturally annotated at the subword level.
Related Work
State-of-the-art neural NER techniques developed in recent years use a combination of neural networks (NN) and Conditional Random Fields (CRFs) to achieve high precision and recall of named entities. These techniques pass word and character embeddings to a bi-directional long short term memory (BLSTM) layer, which may be followed by a CRF layer BIBREF0 , BIBREF1 , BIBREF2 . These state-of-the-art techniques have also been successfully applied to biomedical datasets BIBREF3 , BIBREF4 . Although these techniques use “subword” features such as character embeddings, these models take as input a sequence of words and output a sequence of word tags, and are thus different from what we refer to as subword models in this paper. We build upon state-of-the-art neural techniques to evaluate models that take subword input features and produce corresponding subword output tags.
Subword models have mostly been developed in the context of multilingual datasets BIBREF5 , machine translation BIBREF6 , and processing for character-based languages BIBREF7 . BIBREF8 develop a model that tags sequences of bytes, though they ultimately rely on word boundaries to determine appropriate tags. BIBREF9 use characters, phonemes, and bytes as subword features, and similarly tag entities at word-level boundaries.
Byte-pair encoding iteratively combines frequent characters to build a “codebook” of character merge operations BIBREF10 , BIBREF11 . BIBREF12 also use BPE as features in their biological relation extraction and NER multi-task model, though they primarily focus on the former task.
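For reference, the core merge loop of BPE can be sketched as below; this follows the textbook formulation over a word-frequency vocabulary and is not the exact implementation used in our experiments.

```python
import re, collections

def get_stats(vocab):
    """Count adjacent symbol pairs in a vocabulary of space-separated symbol sequences."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Apply one merge operation to every word in the vocabulary."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges=5000):
    """vocab maps space-separated symbols to corpus frequencies, e.g. {'l o w </w>': 5};
    num_merges=5000 mirrors the 5K merge operations that worked best for BPE token features."""
    codebook = []
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
        codebook.append(best)
    return codebook
```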
Datasets
The first dataset, BioCreative VI Bio-ID, was introduced for Task 1 of BioCreative VI BIBREF13 , and consists of figure captions with annotations for six entity types. The Bio-ID dataset is the only dataset we experiment on that is annotated with byte offsets and contains raw text that has not been tokenized or converted into ASCII format. The second dataset, JNLPBA, is an annotated set of 2,404 biomedical abstracts BIBREF14 with annotations for five entity types. The third dataset, GENETAG BIBREF15 , is a collection of 20K sentences from MEDLINE that are annotated with proteins/genes. All samples in JNLPBA and GENETAG have been converted into ASCII format and are annotated at the word level.
For the byte NN models, we extract overlapping samples from the original training set to collect more data for our models to train with; these additional samples are to compensate for semantic information that is usually derived from pre-trained word embeddings. For the byte NN models, we extract training samples of 150 bytes from all datasets, long enough to encompass most of the tagged entities in the training data. To extract multiple samples from an original data sample, we right-shift by 75 bytes to collect the next 150-byte sample, thereby producing new samples with some overlapping content. We experiment with extracting samples using different stride lengths; a stride of 75 bytes generally improves model performance over using samples with no overlap, while also keeping the training time reasonable. The overlapping samples in the new training set are constrained to not start or end in the middle of an entity. We also break up samples in the development and test sets into 150-byte samples, again using a stride of 75 bytes to gather the next sample; we then follow BIBREF5 's method of using overlapping samples to capture possible entities that occur at the boundary of a sample and then re-combining samples to get rid of the overlapped portions.
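The overlapping-window extraction described above can be sketched as follows; the entity-boundary check is a simplified version of the constraint that samples must not start or end in the middle of an entity, and the variable names are illustrative.

```python
def extract_samples(byte_seq, tags, length=150, stride=75):
    """Slice a byte sequence and its per-byte IOB/IOBES tags into overlapping samples,
    skipping windows that would cut an entity at either edge."""
    samples = []
    for start in range(0, max(len(byte_seq) - length, 0) + 1, stride):
        window_tags = tags[start:start + length]
        if not window_tags:
            continue
        starts_inside = window_tags[0].startswith(("I-", "E-"))   # window begins mid-entity
        ends_inside = window_tags[-1].startswith(("B-", "I-"))    # entity continues past window
        if starts_inside or ends_inside:
            continue
        samples.append((byte_seq[start:start + length], window_tags))
    return samples
```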
For all other models, we pass in the original training, development, and test data without additional extraction of samples. The word NN model implementation we use takes the longest sample in the entire dataset and pads all samples to the max sample length.
Methods
We compare variations of the byte-level model with two word-level models for each dataset, and also include state-of-the-art results. For the NN models, we take sentences from 10% of the files in the Bio-ID dataset and JNLPBA dataset and 10% of the sentences in the GENETAG dataset to be the development sets. Our NN models learn to predict IOBES tag outputs for each byte. The IOBES and IOB schemes are similar in terms of effectiveness BIBREF16 ; BIBREF17 choose IOBES for expressiveness. Our NN model is relatively large, and we believe the amount of network parameters would allow us to use the more expressive scheme at a negligible cost.
Word CRF model
NERSuite is a CRF-based NER system that uses tokenization, lemmatization, POS-tagging, and chunking as features to tag tokens in a sequence. For each dataset, we train a NERSuite model on the training and development sets and tag each word in a sequence with an IOB tag. INLINEFORM0
Word-level NN model
BIBREF0 presents a state-of-the-art NER model that takes words as input and outputs IOBES tag predictions for each word. The BLSTM-CRF architecture uses character embeddings from convolutional neural network (CNN) layers concatenated with pre-trained word embeddings as features. For the Bio-ID dataset, we also use NERSuite's tokenizer to tokenize the data before passing it to the word-level NER model; this tokenization makes the model consistent with the tokenized JNLPBA and GENETAG datasets, even though the model thus relies on tokenization heuristics. We use BIBREF16 's word-level NER implementation.
Byte-level NN model
All of our byte-level model variations use a subset of four features: byte embeddings, BPE embeddings, pre-trained BPE embeddings, and pre-trained word embeddings. Byte embeddings and BPE embeddings are trained in conjunction with the model. Pre-trained word embeddings are trained on PubMed abstracts and PubMed Central full texts, and pre-trained BPE embeddings are trained only on the latter. All pre-trained embeddings are derived from a skip-gram model BIBREF18 . For each byte in the input sequence, we concatenate all feature embeddings for the byte. When BPE or word features span multiple bytes, the same feature is repeated across bytes. We do a simple whitespace tokenization to decide which words (and subsequently, subwords) to get embeddings for, to keep our model free of manually-crafted tokenization rules.
We find that our model is slightly better when we use BPE subword tokens generated from the full PubMed Central text versus from the training data. Additionally, pre-training embeddings for BPE subword tokens improves performance. Our initial experiments also show that when using BPE features in our model, running the BPE algorithm with 5K merge operations produces the best results; when using BPE embedding features, running the BPE algorithm with 50K merge operations and then generating 100-dimensional pre-trained BPE embeddings produces the best results. In our reported results, we always use the prior configurations. Unless otherwise stated, the byte NN model with byte embeddings and pre-trained BPE embeddings as features is the general “byte NN” model that we report results for. These features, along with the general byte CNN-BLSTM-CRF architecture, produce the best results.
The model starts with a stack of 20 CNN layers with residual connections between each layer. Following the pattern of effective neural NER architectures, the CNN stack is followed by a BLSTM layer and then a CRF layer, with hidden layers in between, as shown in Figure FIGREF12 . Our preliminary experiments indicate that a stack of CNNs and residual connections are necessary for our byte-level models to reach comparable performance with the word-level models.
We find that passing the pre-trained embeddings through the entire CNN-BLSTM-CRF network and also allowing the embeddings to be fine-tuned through the CNN layers improve the overall scores. Additional dropout BIBREF19 of embeddings and after each CNN layer further improves model performance. We also incorporate byte-dropout BIBREF5 , a technique that makes the model more robust to noise by randomly replacing a percentage of input bytes with a special DROP symbol.
For the byte NN model, we use dropout with a rate of 0.5, byte-dropout with a rate of 0.3, a learning rate of 0.0001 with Adam, and a mini batch size of 256 samples. The pre-trained word embeddings are 200-dimensional embeddings, and the pre-trained BPE embeddings are 100-dimensional embeddings. We use CNNs with 250 filters, a filter size of 7 bytes, a filter stride of 1 byte, and a ReLu activation function. The BLSTM layer also has 250 units and uses a tanh activation function. We run the byte NER models for 300 epochs. Non-pre-trained embeddings are initialized with a random uniform distribution [-0.05, 0.05].
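Under these hyper-parameters, the byte-level network can be sketched with Keras as below. This is an approximation rather than the exact model: the embedding dimensions, the number of output tags, and the handling of the first residual connection are assumptions, byte-dropout and the intermediate hidden layers are omitted, and the final CRF layer (which needs an add-on such as keras-contrib or tensorflow-addons) is replaced by a per-byte softmax.

```python
from tensorflow.keras import layers, models, optimizers

def build_byte_tagger(seq_len=150, n_bytes=256, bpe_vocab=50000, n_tags=25):
    byte_in = layers.Input((seq_len,), name="bytes")
    bpe_in = layers.Input((seq_len,), name="bpe_ids")   # pre-trained BPE ids, repeated per byte
    x = layers.Concatenate()([
        layers.Embedding(n_bytes, 100)(byte_in),
        layers.Embedding(bpe_vocab, 100)(bpe_in),       # would be initialised from pre-trained BPE vectors
    ])
    x = layers.Dropout(0.5)(x)
    for _ in range(20):                                  # stack of 20 CNN layers with residual connections
        h = layers.Conv1D(250, 7, padding="same", activation="relu")(x)
        h = layers.Dropout(0.5)(h)
        x = layers.Add()([x, h]) if x.shape[-1] == h.shape[-1] else h
    x = layers.Bidirectional(layers.LSTM(250, return_sequences=True, activation="tanh"))(x)
    out = layers.Dense(n_tags, activation="softmax")(x)  # stand-in for the CRF output layer
    model = models.Model([byte_in, bpe_in], out)
    model.compile(optimizer=optimizers.Adam(1e-4), loss="sparse_categorical_crossentropy")
    return model
```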
BPE embeddings are 100-dimensional embeddings and are trained for 10 iterations using the skip gram model with a window size of 5 tokens.
The word NN model has a mini batch size of 32 samples, a clipnorm of 1, an output dropout of 0.5, a recurrent dropout of 0.5, a default learning rate of 0.002 with Nadam. It uses a CNN layer with 25 filters, a filter size of 7 characters, a filter stride of 1 character and a ReLu activation function to get character embeddings. Additional features (tokens and casing) have the default dimensions of 10. The BLSTM layer has 200 units and uses a tanh activation function. The model is run for 100 epochs without early stopping.
Results
Table TABREF15 compares the INLINEFORM0 scores of entities in the Bio-ID dataset tagged by our models. The byte NN model is better at finding cell type or lines, organisms or species, and protein or genes than the word NN model. We examined why the word NN model has an INLINEFORM1 score 10% higher than that of the other models for small molecules. Although a large number (55%) of the entities in the Bio-ID dataset are protein and genes, we find that the proportion of small molecules mistaken for protein or genes is higher than that of other entities mistaken for protein or genes. Looking at overall sequences of words may be necessary for more accurate identification of small molecules.
The best model submitted to BioCreative VI Track 1 uses a word-level CRF-based approach, along with preprocessing and heuristics BIBREF20 . The byte NN model outperforms all other models for protein or gene categories; importantly, the byte NN model is the only fully learned model that does not rely on heuristics for tokenization and other processing.
Tables TABREF15 and TABREF15 show that the byte-level model does not beat the word-level model on the JNLPBA and GENETAG datasets. Because the annotation of JNLPBA and GENETAG were explicitly constrained to words, we believe they do not serve as useful bases for our exploration of byte-level models. Our initial results on these datasets indicate that fully end-to-end byte-level models may be more suitable for entities whose spans do not align with word spans.
We also look at the effect of byte, BPE, and word features in Table TABREF16 . Previous works have shown that pre-trained word embeddings are important features for word-level NER models; we find that they are less useful for byte-level models. For a consistent feature set across bytes, contiguous bytes belonging to the same word have the same word feature. This repetition of information may diminish the effectiveness of word embeddings in the byte-level models. However, even though we repeat BPE features in the same way, table TABREF16 shows that BPE features are useful. Because the Bio-ID dataset is dominated by protein or genes, the byte NN model trained on byte and pre-trained BPE embeddings has a higher overall micro- INLINEFORM0 score than the byte NN model that only uses pre-trained BPE embeddings. With these results, we emphasize that BPE features are useful subword information for NER at the byte-level.
Conclusion
Our initial experiments on the byte-level NER models across datasets motivate these models as a useful end-to-end alternative for entities that naturally exist at the subword level. Further investigations into byte-level models could help facilitate more precise byte-level annotation schemes for the biomedical domain.
Acknowledgments
We would like to thank José-Luis Ambite, Scott Miller, Aram Galstyan, Ryan Gabbard, as well as all the anonymous reviewers for their invaluable advice regarding this work. | No |
21a96b328b43a568f9ba74cbc6d4689dbc4a3d7b | 21a96b328b43a568f9ba74cbc6d4689dbc4a3d7b_0 | Q: Do they single out a validation set from the fixed SRE training set?
Text: Introduction
Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task like previous years is to perform speaker detection with the focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation of languages addressed in this evaluation, recording environment, and variability of test segments duration influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation.
Over recent years, the i-vector representation of speech segments has been widely used by state-of-the-art speaker recognition systems BIBREF0 . The speaker recognition technology based on i-vectors currently dominates the research field due to its performance, low computational cost and the compatibility of i-vectors with machine learning techniques. This dominance is reflected by the recent NIST i-vector machine learning challenge BIBREF1 which was designed to find the most promising algorithmic approaches to speaker recognition specifically on the basis of i-vectors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . The outstanding ability of DNN for frame alignment which has achieved remarkable performance in text-independent speaker recognition for English data BIBREF6 , BIBREF7 , failed to provide even comparable recognition performance to the traditional GMM. Therefore, we concentrated on the cepstral based GMM/i-vector system.
We outline in this paper the Intelligent Voice system, techniques and results obtained on the SRE 2016 development set that will mirror the evaluation condition as well as the timing report. Section SECREF2 describes the data used for the system training. The front-end and back-end processing of the system are presented in Sections SECREF3 and SECREF4 respectively. In Section SECREF5 , we describe experimental evaluation of the system on the SRE 2016 development set. Finally, we present a timing analysis of the system in Section SECREF6 .
Training Condition
The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the linguistic data consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phase I,II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections.
Front-End Processing
In this section we will provide a description of the main steps in front-end processing of our speaker recognition system including speech activity detection, acoustic and i-vector feature extraction.
Speech Activity Detection
The first stage of any speaker recognition system is to detect the speech content in an audio signal. An accurate speech activity detector (SAD) can improve the speaker recognition performance. Several techniques have been proposed for SAD, including unsupervised methods based on thresholding the signal energy, and supervised methods that train a speech/non-speech classifier such as support vector machines (SVM) BIBREF8 and Gaussian mixture models (GMMs) BIBREF9 . Hidden Markov models (HMMs) BIBREF10 have also been successful. Recently, it has been shown that DNN systems achieve impressive improvements in performance, especially at low signal-to-noise ratios (SNRs) BIBREF11 . In our work we have utilized a two-class DNN-HMM classifier to perform this task. The DNN-HMM hybrid configuration with cross-entropy as the objective function has been trained with the back-propagation algorithm. The softmax layer produces posterior probabilities for speech and non-speech, which were then converted into log-likelihoods. Using 2-state HMMs corresponding to speech and non-speech, frame-wise decisions are made by Viterbi decoding. As input to the network, we fed 40-dimensional filter-bank features along with 7 frames from each side. The network has 6 hidden layers with 512 units each. The architecture of our DNN-HMM SAD is shown in Figure FIGREF3 . Approximately 100 hours of speech data from the Switchboard telephony data with word alignments as ground-truth were used to train our SAD. The DNN training is performed on an NVIDIA TITAN X GPU, using Kaldi software BIBREF12 . Evaluated on 50 hours of telephone speech data from the same database, our DNN-HMM SAD indicated a frame-level misclassification (speech/non-speech) rate of 5.9%, whereas an energy-based SAD did not perform better than 20%.
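A simplified version of the smoothing step, a two-state (speech/non-speech) Viterbi pass over the frame log-likelihoods, is sketched below; the transition and prior parameters are left as inputs and are illustrative, not the values used in the actual Kaldi-based system.

```python
import numpy as np

def viterbi_speech_nonspeech(loglik, log_trans, log_prior):
    """loglik: (T, 2) frame log-likelihoods for [non-speech, speech];
    log_trans: (2, 2) log transition matrix; log_prior: (2,) initial log probabilities."""
    T = loglik.shape[0]
    delta = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    delta[0] = log_prior + loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # rows: previous state, columns: current state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path   # per-frame decision: 0 = non-speech, 1 = speech
```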
Acoustic Features
For acoustic features we have experimented with different configurations of cepstral features. We have used 39-dimensional PLP features and 60-dimensional MFCC features (including their first and second order derivatives) as acoustic features. Moreover, our experiments indicated that the combination of these two feature sets performs particularly well in score fusion. Both PLP and MFCC are extracted at 8kHz sample frequency using Kaldi BIBREF12 with 25 and 20 ms frame lengths, respectively, and a 10 ms overlap (other configurations are the same as Kaldi defaults). For each utterance, the features are centered using a short-term (3s window) cepstral mean and variance normalization (ST-CMVN). Finally, we employed our DNN-HMM speech activity detector (SAD) to drop non-speech frames.
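The short-term normalisation can be sketched as a centred sliding window of roughly 3 seconds (about 300 frames at a 10 ms shift); this is an illustrative NumPy implementation, not the Kaldi one.

```python
import numpy as np

def st_cmvn(feats, window=300, eps=1e-8):
    """feats: (T, dim) float frame-level features; each frame is normalised with the mean and
    standard deviation of a centred window of about 3 seconds."""
    half = window // 2
    out = np.empty_like(feats)
    for t in range(feats.shape[0]):
        lo, hi = max(0, t - half), min(feats.shape[0], t + half + 1)
        chunk = feats[lo:hi]
        out[t] = (feats[t] - chunk.mean(axis=0)) / (chunk.std(axis=0) + eps)
    return out
```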
i-Vector Features
Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named total variability matrix BIBREF0 . We trained on each acoustic feature a full covariance, gender-independent UBM model with 2048 Gaussians followed by a 600-dimensional i-vector extractor to establish our MFCC- and PLP-based i-vector systems. The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 .
It has been shown that successive acoustic observation vectors tend to be highly correlated. This may be problematic for maximum a posteriori (MAP) estimation of i-vectors. To address this issue, scaling the zero- and first-order Baum-Welch statistics before presenting them to the i-vector extractor has been proposed. It turns out that a scale factor of 0.33 gives a slight edge, resulting in a better decision cost function BIBREF13 . This scaling has been applied in training the i-vector extractor as well as in testing.
Back-End Processing
This section provides the steps performed in back-end processing of our speaker recognition system.
Nearest-neighbor Discriminant Analysis (NDA)
The nearest-neighbor discriminant analysis is a nonparametric discriminant analysis technique which was proposed in BIBREF14 , and recently used in speaker recognition BIBREF15 . The nonparametric within- and between-class scatter matrices INLINEFORM0 and INLINEFORM1 , respectively, are computed based on INLINEFORM2 nearest neighbor sample information. The NDA transform is then formed using eigenvectors of INLINEFORM3 . It has been shown that as the number of nearest neighbors INLINEFORM4 approaches the number of samples in each class, the NDA essentially becomes the LDA projection. Based on the findings in BIBREF15 , NDA outperformed LDA due to its ability to capture the local structure and boundary information within and across different speakers. We applied a INLINEFORM5 NDA projection matrix, computed using information from the 10 nearest samples, to the centered i-vectors. The resulting dimensionality-reduced i-vectors are then whitened using both the training data and the unlabelled development set.
Short-Duration Variability Compensation
The enrolment condition of the development set is supposed to provide at least 60 seconds of speech data for each target speaker. Nevertheless, our SAD indicates that the speech content is as low as 26 seconds in some cases. The test segment durations, which range from 9 to 60 seconds of speech material, can result in poor performance for shorter segments. As indicated in Figure FIGREF8 , more than one third of the test segments have a speech duration of less than 20 seconds. We have addressed this issue by proposing a short-duration variability compensation method. The proposed method works by first extracting, from each audio segment in the unlabelled development set, a partial excerpt of 10 seconds of speech material with a randomly selected starting point (Figure FIGREF9 ). Each audio file in the unlabelled development set, together with its extracted excerpt, results in two 400-dimensional i-vectors, one with at most 10 seconds of speech material. Considering each pair as one class, we computed a INLINEFORM0 LDA projection matrix to remove directions attributed to duration variability. Moreover, the projected i-vectors are also subjected to a within-class covariance normalization (WCCN) using the same class labels.
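The construction of the duration-compensation classes can be sketched as follows; the i-vector extraction and the utterance interface (duration and crop) are placeholders for components described elsewhere in this paper, and the subsequent LDA/WCCN estimation is omitted.

```python
import random

def duration_pairs(utterances, extract_ivector, excerpt_len=10.0):
    """Build (full, short) i-vector pairs from the unlabelled set: for each utterance, cut a
    10-second speech excerpt with a random starting point and treat the two resulting i-vectors
    as one class for the duration-compensation LDA and WCCN."""
    ivectors, labels = [], []
    for idx, utt in enumerate(utterances):
        start = random.uniform(0.0, max(utt.duration - excerpt_len, 0.0))
        excerpt = utt.crop(start, start + excerpt_len)     # assumed helper on the utterance object
        ivectors += [extract_ivector(utt), extract_ivector(excerpt)]
        labels += [idx, idx]                               # each (full, excerpt) pair is its own class
    return ivectors, labels
```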
Language Normalization
Language-source normalization is an effective technique for reducing language dependency in the state-of-the-art i-vector/PLDA speaker recognition system BIBREF16 . It can be implemented by extending SN-LDA BIBREF17 in order to mitigate variations that separate languages. This can be accomplished by using the language label to identify different sources during training. Language Normalized-LDA (LN-LDA) utilizes a language-normalized within-speaker scatter matrix INLINEFORM0 which is estimated as the variability not captured by the between-speaker scatter matrix, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the total scatter and normalized between-speaker scatter matrices respectively, and are formulated as follows: DISPLAYFORM0
where INLINEFORM0 is the total number of i-vectors and DISPLAYFORM0
where INLINEFORM0 is the number of languages in the training set, INLINEFORM1 is the number of speakers in language INLINEFORM2 , INLINEFORM3 is the mean of INLINEFORM4 i-vectors from speaker INLINEFORM5 and language INLINEFORM6 and finally INLINEFORM7 is the mean of all i-vectors in language INLINEFORM8 . We applied a INLINEFORM9 SN-LDA projection matrix to reduce the i-vector dimensions down to 300.
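Since the normalisation constants in the equations above are not reproduced here, the following is only a schematic NumPy sketch of the language-normalised scatter computation: the between-speaker scatter is accumulated around language-specific means, and the within-speaker scatter is taken as the residual of the total scatter.

```python
import numpy as np

def ln_lda_scatters(ivecs, spk_labels, lang_labels):
    """ivecs: (N, d) i-vectors; spk_labels / lang_labels: NumPy arrays of per-i-vector speaker
    and language ids. Schematic only: scaling constants from the paper's equations are omitted."""
    centered = ivecs - ivecs.mean(axis=0)
    S_t = centered.T @ centered / len(ivecs)                # total scatter
    S_b = np.zeros_like(S_t)
    for lang in set(lang_labels):
        mu_lang = ivecs[lang_labels == lang].mean(axis=0)
        for spk in set(spk_labels[lang_labels == lang]):
            spk_vecs = ivecs[(lang_labels == lang) & (spk_labels == spk)]
            diff = spk_vecs.mean(axis=0) - mu_lang          # speaker mean around its language mean
            S_b += len(spk_vecs) * np.outer(diff, diff)
    S_b /= len(ivecs)
    S_w = S_t - S_b                                         # language-normalised within-speaker scatter
    return S_w, S_b
```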
PLDA
Probabilistic Linear Discriminant Analysis (PLDA) provides a powerful mechanism to distinguish between-speaker variability, the source which characterizes speaker information, from all other sources of undesired variability that characterize distortions. Since i-vectors are assumed to be generated by some generative model, we can break them down into statistically independent speaker- and session-components with Gaussian distributions BIBREF18 , BIBREF19 . Although it has been shown that their distribution follows a Student’s INLINEFORM0 rather than a Gaussian BIBREF19 distribution, length normalizing the entire set of i-vectors as a pre-processing step can approximately Gaussianize their distributions BIBREF18 and as a result improve the performance of Gaussian PLDA to that of heavy-tailed PLDA BIBREF19 . A standard Gaussian PLDA assumes that an i-vector INLINEFORM1 is modelled according to DISPLAYFORM0
where, INLINEFORM0 is the mean of i-vectors, the columns of matrix INLINEFORM1 contains the basis for the between-speaker subspace, the latent identity variable INLINEFORM2 denotes the speaker factor that represents the identity of the speaker and the residual INLINEFORM3 which is normally distributed with zero mean and full covariance matrix INLINEFORM4 , represents within-speaker variability.
For each acoustic feature we have trained two PLDA models. The first out-domain PLDA ( INLINEFORM0 , INLINEFORM1 ) is trained using the training set presented in Table TABREF1 , and the second in-domain PLDA ( INLINEFORM2 , INLINEFORM3 ) was trained using the unlabelled development set. Our efforts to cluster the development set (e.g. using the out-domain PLDA) were not very successful, as it appears that almost all of the segments are uttered by different speakers. Therefore, each i-vector was considered to be uttered by a separate speaker. We also set the number of speaker factors to 200.
Domain Adaptation
Domain adaptation has gained considerable attention with the aim of compensating for cross-speech-source variability of in-domain and out-of-domain data. The framework presented in BIBREF20 for unsupervised adaptation of out-domain PLDA parameters resulted in better performance for in-domain data. Using in-domain and out-domain PLDA trained in Section SECREF14 , we interpolated their parameters as follow: DISPLAYFORM0
We chose INLINEFORM0 for making our submission.
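The interpolation itself reduces to a convex combination of the in-domain and out-of-domain PLDA covariance parameters; the sketch below leaves the weight symbolic and assumes the two models expose their between-speaker and within-speaker covariances as attributes.

```python
def adapt_plda(plda_in, plda_out, alpha):
    """Linearly interpolate the between-speaker (phi) and within-speaker (sigma) covariance
    parameters of the in-domain and out-of-domain PLDA models; alpha is the interpolation
    weight chosen on development data (attribute names are assumptions)."""
    phi = alpha * plda_in.phi + (1.0 - alpha) * plda_out.phi
    sigma = alpha * plda_in.sigma + (1.0 - alpha) * plda_out.sigma
    return phi, sigma
```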
Score Computation and Normalization
For the one-segment enrolment condition, the speaker model is the length-normalized i-vector of that segment; however, for the three-segment enrolment condition, we simply used the length-normalized mean of the length-normalized i-vectors as the speaker model. Each speaker model is tested against each test segment as in the trial list. For each pair of trial i-vectors INLINEFORM0 and INLINEFORM1 , the PLDA score is computed as DISPLAYFORM0
in which DISPLAYFORM0 DISPLAYFORM1
and INLINEFORM0 and INLINEFORM1 . It has been shown, and confirmed in our experiments, that score normalization can have a great impact on the performance of the recognition system. We used the symmetric s-norm proposed in BIBREF19 , which normalizes the score INLINEFORM2 of the pair INLINEFORM3 using the formula DISPLAYFORM0
where the means INLINEFORM0 and standard deviations INLINEFORM1 are computed by matching INLINEFORM2 and INLINEFORM3 against the unlabelled set as the impostor speakers, respectively.
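The commonly used form of the symmetric normalisation, with the unlabelled development set acting as the impostor cohort, can be sketched as follows; scoring the enrolment model and the test segment against the cohort with the PLDA backend is abstracted into the two score arrays.

```python
import numpy as np

def s_norm(raw_score, enroll_cohort_scores, test_cohort_scores):
    """raw_score: PLDA score of the (enrolment, test) pair; *_cohort_scores: scores of the
    enrolment model and of the test segment against the unlabelled-set cohort."""
    mu_e, sd_e = np.mean(enroll_cohort_scores), np.std(enroll_cohort_scores)
    mu_t, sd_t = np.mean(test_cohort_scores), np.std(test_cohort_scores)
    return 0.5 * ((raw_score - mu_e) / sd_e + (raw_score - mu_t) / sd_t)
```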
Quality Measure Function
It has been shown that there is a dependency between the value of the INLINEFORM0 threshold and the duration of both enrolment and test segments. Applying the quality measure function (QMF) BIBREF3 enabled us to compensate for the shift in the INLINEFORM1 threshold due to the differences in speech duration. We conducted some experiments to estimate the dependency of the INLINEFORM2 threshold shift on the duration of the test segment and used the following QMF for PLDA verification scores: DISPLAYFORM0
where INLINEFORM0 is the duration of the test segment in seconds.
Calibration
In the literature, the performance of speaker recognition is usually reported in terms of the calibration-insensitive equal error rate (EER) or the minimum decision cost function ( INLINEFORM0 ). However, in real applications of speaker recognition there is a need to present recognition results in terms of calibrated log-likelihood ratios. We have utilized the BOSARIS Toolkit BIBREF21 for calibration of scores. INLINEFORM1 provides an ideal reference value for judging calibration. If INLINEFORM2 is minimized, then the system can be said to be well calibrated.
The choice of target probability ( INLINEFORM0 ) had a great impact on the performance of the calibration. However, we set INLINEFORM1 for our primary submission which performed the best on the development set. For our secondary submission INLINEFORM2 was used.
Results and Discussion
In this section we present the results obtained on the protocol provided by NIST on the development set which is supposed to mirror that of evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP, however, in fusion we can see that NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we can see performance degradation in MFCC as well as fusion, however, PLP seems not to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in either LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.
Time Analysis
This section reports on the CPU execution time (single threaded), and the amount of memory used to process a single trial, which includes the time for creating models from the enrolment data and the time needed for processing the test segments. The analysis was performed on an Intel(R) Xeon(R) CPU E5-2670 2.60GHz. The results are shown in Table TABREF27 . We used the time command in Unix to report these results. The user time is the actual CPU time used in executing the process (single thread). The real time is the wall clock time (the elapsed time including time slices used by other processes and the time the process spends blocked). The system time is also the amount of CPU time spent in the kernel within the process. We have also reported the memory allocated for each stage of execution. The most computationally intensive stage is the extraction of i-vectors (both MFCC- and PLP-based i-vectors), which also depends on the duration of the segments. For enrolment, we have reported the time required to extract a model from a segment with a duration of 140 seconds and speech duration of 60 seconds. The time and memory required for front-end processing are negligible compared to the i-vector extraction stage, since they only include matrix operations. The time required for our SAD is also reported which increases linearly with the duration of segment.
Conclusions and Perspectives
We have presented the Intelligent Voice speaker recognition system used for the NIST 2016 speaker recognition evaluation. Our system is based on a score fusion of MFCC- and PLP-based i-vector/PLDA systems. We have described the main components of the system, including acoustic feature extraction, speech activity detection and i-vector extraction as front-end processing, and language normalization, short-duration compensation, channel compensation and domain adaptation as back-end processing. For our future work, we intend to use the ALISP segmentation technique BIBREF22 in order to extract meaningful acoustic units with which to train supervised GMM or DNN models. | No
30803eefd7cdeb721f47c9ca72a5b1d750b8e03b | 30803eefd7cdeb721f47c9ca72a5b1d750b8e03b_0 | Q: How well does their system perform on the development set of SRE?
Text: Introduction
Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task, as in previous years, is to perform speaker detection with the focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation of languages addressed in this evaluation, the recording environment, and the variability of test segment duration influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation.
Over recent years, the i-vector representation of speech segments has been widely used by state-of-the-art speaker recognition systems BIBREF0 . The speaker recognition technology based on i-vectors currently dominates the research field due to its performance, low computational cost and the compatibility of i-vectors with machine learning techniques. This dominance is reflected by the recent NIST i-vector machine learning challenge BIBREF1 , which was designed to find the most promising algorithmic approaches to speaker recognition specifically on the basis of i-vectors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . DNN-based frame alignment, which has achieved remarkable performance in text-independent speaker recognition on English data BIBREF6 , BIBREF7 , failed to provide recognition performance even comparable to that of the traditional GMM. Therefore, we concentrated on the cepstral-based GMM/i-vector system.
We outline in this paper the Intelligent Voice system, techniques and results obtained on the SRE 2016 development set that will mirror the evaluation condition as well as the timing report. Section SECREF2 describes the data used for the system training. The front-end and back-end processing of the system are presented in Sections SECREF3 and SECREF4 respectively. In Section SECREF5 , we describe experimental evaluation of the system on the SRE 2016 development set. Finally, we present a timing analysis of the system in Section SECREF6 .
Training Condition
The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the Linguistic Data Consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phase I, II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections.
Front-End Processing
In this section we provide a description of the main steps in the front-end processing of our speaker recognition system, including speech activity detection, and acoustic and i-vector feature extraction.
Speech Activity Detection
The first stage of any speaker recognition system is to detect the speech content in an audio signal. An accurate speech activity detector (SAD) can improve the speaker recognition performance. Several techniques have been proposed for SAD, including unsupervised methods based on thresholding the signal energy, and supervised methods that train a speech/non-speech classifier such as support vector machines (SVM) BIBREF8 and Gaussian mixture models (GMMs) BIBREF9 . Hidden Markov models (HMMs) BIBREF10 have also been successful. Recently, it has been shown that DNN systems achieve impressive improvements in performance, especially at low signal to noise ratios (SNRs) BIBREF11 . In our work we have utilized a two-class DNN-HMM classifier to perform this task. The DNN-HMM hybrid configuration with cross-entropy as the objective function has been trained with the back-propagation algorithm. The softmax layer produces posterior probabilities for speech and non-speech, which were then converted into log-likelihoods. Using 2-state HMMs corresponding to speech and non-speech, frame-wise decisions are made by Viterbi decoding. As input to the network, we fed 40-dimensional filter-bank features along with 7 frames from each side. The network has 6 hidden layers with 512 units each. The architecture of our DNN-HMM SAD is shown in Figure FIGREF3 . Approximately 100 hours of speech data from the Switchboard telephony data with word alignments as ground-truth were used to train our SAD. The DNN training is performed on an NVIDIA TITAN X GPU, using the Kaldi software BIBREF12 . Evaluated on 50 hours of telephone speech data from the same database, our DNN-HMM SAD indicated a frame-level misclassification (speech/non-speech) rate of 5.9%, whereas an energy-based SAD did not perform better than 20%.
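As a rough illustration of the decoding step described above, the sketch below smooths frame-level speech/non-speech posteriors (as a DNN softmax would produce) with a 2-state HMM via Viterbi decoding. The class priors, self-loop probability and toy data are assumptions for illustration, not values from the system.

```python
import numpy as np

def viterbi_sad(posteriors, prior=(0.5, 0.5), self_loop=0.95):
    """Smooth frame-wise speech/non-speech posteriors with a 2-state HMM.

    posteriors: (T, 2) array of softmax outputs for [non-speech, speech].
    Returns a (T,) array of 0/1 decisions after Viterbi decoding.
    """
    eps = 1e-10
    # Convert posteriors to pseudo log-likelihoods by removing the class prior.
    loglik = np.log(posteriors + eps) - np.log(np.asarray(prior) + eps)
    # Strong self-loops discourage rapid state switching.
    logA = np.log(np.array([[self_loop, 1.0 - self_loop],
                            [1.0 - self_loop, self_loop]]) + eps)
    T = loglik.shape[0]
    delta = np.zeros((T, 2))           # best log-score ending in each state
    psi = np.zeros((T, 2), dtype=int)  # back-pointers
    delta[0] = np.log(np.asarray(prior) + eps) + loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA       # (previous state, state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(2)] + loglik[t]
    # Back-track the best state sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path  # 1 = speech frame, 0 = non-speech frame

# Toy usage: 100 frames of noisy posteriors.
post = np.random.dirichlet([1, 1], size=100)
decisions = viterbi_sad(post)
```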
Acoustic Features
For acoustic features we have experimented with different configurations of cepstral features. We have used 39-dimensional PLP features and 60-dimensional MFCC features (including their first and second order derivatives) as acoustic features. Moreover, our experiments indicated that the combination of these two feature sets performs particularly well in score fusion. Both PLP and MFCC are extracted at 8kHz sample frequency using Kaldi BIBREF12 with 25 and 20 ms frame lengths, respectively, and a 10 ms overlap (other configurations are the same as Kaldi defaults). For each utterance, the features are centered using a short-term (3s window) cepstral mean and variance normalization (ST-CMVN). Finally, we employed our DNN-HMM speech activity detector (SAD) to drop non-speech frames.
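The short-term normalization mentioned above can be sketched as a sliding-window mean/variance normalization; the 300-frame window (3 s at a 10 ms frame shift) and the feature dimension are assumptions for illustration.

```python
import numpy as np

def st_cmvn(feats, win=300):
    """Short-term CMVN: normalize each frame with the statistics of a
    centered sliding window (win frames, roughly 3 s at a 10 ms shift)."""
    T, _ = feats.shape
    out = np.empty_like(feats, dtype=float)
    half = win // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half)
        chunk = feats[lo:hi]
        mu = chunk.mean(axis=0)
        sigma = chunk.std(axis=0) + 1e-8   # avoid division by zero
        out[t] = (feats[t] - mu) / sigma
    return out

# Example: 1000 frames of 60-dimensional MFCCs.
mfcc = np.random.randn(1000, 60)
mfcc_norm = st_cmvn(mfcc)
```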
i-Vector Features
Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as the universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named the total variability matrix BIBREF0 . For each acoustic feature, we trained a full-covariance, gender-independent UBM model with 2048 Gaussians, followed by a 600-dimensional i-vector extractor, to establish our MFCC- and PLP-based i-vector systems. The unlabelled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 .
It has been shown that successive acoustic observation vectors tend to be highly correlated. This may be problematic for maximum a posteriori (MAP) estimation of i-vectors. To investigate this issue, scaling the zero- and first-order Baum-Welch statistics before presenting them to the i-vector extractor has been proposed. It turns out that a scale factor of 0.33 gives a slight edge, resulting in a better decision cost function BIBREF13 . This scaling has been applied in training the i-vector extractor as well as in testing.
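The statistics scaling described above amounts to a single multiplication; the sketch below uses the 0.33 factor from the text, while the array shapes (a 2048-component UBM over 60-dimensional features) are assumptions for illustration.

```python
import numpy as np

def scale_stats(n_stats, f_stats, scale=0.33):
    """Scale zero- and first-order Baum-Welch statistics before they are
    presented to the i-vector extractor."""
    return scale * n_stats, scale * f_stats

# n_stats: per-component occupation counts, f_stats: per-component first-order sums.
n_stats = np.random.rand(2048)        # one count per UBM component
f_stats = np.random.randn(2048, 60)   # one 60-dimensional sum per component
n_scaled, f_scaled = scale_stats(n_stats, f_stats)
```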
Back-End Processing
This section provides the steps performed in back-end processing of our speaker recognition system.
Nearest-neighbor Discriminant Analysis (NDA)
The nearest-neighbor discriminant analysis is a nonparametric discriminant analysis technique which was proposed in BIBREF14 , and recently used in speaker recognition BIBREF15 . The nonparametric within- and between-class scatter matrices INLINEFORM0 and INLINEFORM1 , respectively, are computed based on INLINEFORM2 nearest-neighbor sample information. The NDA transform is then formed using eigenvectors of INLINEFORM3 . It has been shown that as the number of nearest neighbors INLINEFORM4 approaches the number of samples in each class, the NDA essentially becomes the LDA projection. Based on the findings in BIBREF15 , NDA outperformed LDA due to its ability to capture the local structure and boundary information within and across different speakers. We applied a INLINEFORM5 NDA projection matrix, computed using the 10 nearest neighbors of each sample, on centered i-vectors. The resulting dimensionality-reduced i-vectors are then whitened using both the training data and the unlabelled development set.
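A simplified sketch of the nonparametric scatter computation is given below: each sample is compared with the mean of its K nearest neighbours from the same class (within-class scatter) and from the other classes (between-class scatter), and the projection is taken from the leading eigenvectors of the resulting matrix product. The sample weighting function of the full NDA formulation is omitted, and K, the dimensions and the toy data are illustrative assumptions.

```python
import numpy as np

def nda_projection(X, labels, K=10, out_dim=400):
    """Simplified NDA: scatter matrices built from K-nearest-neighbour local
    means (the weighting function of the full formulation is omitted).
    Returns a (D, out_dim) projection matrix."""
    D = X.shape[1]
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for x, y in zip(X, labels):
        same = X[labels == y]
        diff = X[labels != y]
        # Local means of the K nearest same-class / different-class samples
        # (index 0 of the same-class ranking is the sample itself, so skip it).
        m_same = same[np.argsort(np.linalg.norm(same - x, axis=1))[1:K + 1]].mean(axis=0)
        m_diff = diff[np.argsort(np.linalg.norm(diff - x, axis=1))[:K]].mean(axis=0)
        Sw += np.outer(x - m_same, x - m_same)
        Sb += np.outer(x - m_diff, x - m_diff)
    # Leading eigenvectors of Sw^{-1} Sb form the NDA transform.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(D), Sb))
    order = np.argsort(-evals.real)[:out_dim]
    return evecs[:, order].real

# Toy data: 200 i-vectors of dimension 20 from 20 speakers, projected to 5 dimensions.
X = np.random.randn(200, 20)
labels = np.repeat(np.arange(20), 10)
W = nda_projection(X, labels, K=5, out_dim=5)
X_nda = X @ W
```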
Short-Duration Variability Compensation
The enrolment condition of the development set is supposed to provide at least 60 seconds of speech data for each target speaker. Nevertheless, our SAD indicates that the speech content is as low as 26 seconds in some cases. The test segment durations, which range from 9 to 60 seconds of speech material, can result in poor performance for shorter segments. As indicated in Figure FIGREF8 , more than one third of the test segments have a speech duration of less than 20 seconds. We have addressed this issue by proposing a short-duration variability compensation method. The proposed method works by first extracting, from each audio segment in the unlabelled development set, a partial excerpt of 10 seconds of speech material, with a random selection of the starting point (Figure FIGREF9 ). Each audio file in the unlabelled development set, together with its extracted excerpt, results in two 400-dimensional i-vectors, one of which has at most 10 seconds of speech material. Considering each pair as one class, we computed a INLINEFORM0 LDA projection matrix to remove directions attributed to duration variability. Moreover, the projected i-vectors are also subjected to a within-class covariance normalization (WCCN) using the same class labels.
Language Normalization
Language-source normalization is an effective technique for reducing language dependency in the state-of-the-art i-vector/PLDA speaker recognition system BIBREF16 . It can be implemented by extending SN-LDA BIBREF17 in order to mitigate variations that separate languages. This can be accomplished by using the language label to identify different sources during training. Language Normalized-LDA (LN-LDA) utilizes a language-normalized within-speaker scatter matrix INLINEFORM0 which is estimated as the variability not captured by the between-speaker scatter matrix, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the total scatter and normalized between-speaker scatter matrices respectively, and are formulated as follows: DISPLAYFORM0
where INLINEFORM0 is the total number of i-vectors and DISPLAYFORM0
where INLINEFORM0 is the number of languages in the training set, INLINEFORM1 is the number of speakers in language INLINEFORM2 , INLINEFORM3 is the mean of INLINEFORM4 i-vectors from speaker INLINEFORM5 and language INLINEFORM6 and finally INLINEFORM7 is the mean of all i-vectors in language INLINEFORM8 . We applied a INLINEFORM9 SN-LDA projection matrix to reduce the i-vector dimensions down to 300.
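A sketch of the language-normalized scatter computation follows the description above: the between-speaker scatter is accumulated around per-language means, and the within-speaker scatter is taken as the residual of the total scatter. Weighting each speaker by its number of i-vectors, the variable names and the toy data are assumptions for illustration.

```python
import numpy as np

def ln_lda_scatters(X, spk, lang):
    """Language-normalized scatter matrices.

    X:    (N, D) i-vectors
    spk:  speaker label per i-vector
    lang: language label per i-vector
    Returns (S_w, S_b): the within-speaker scatter is the variability not
    captured by the language-normalized between-speaker scatter.
    """
    _, D = X.shape
    mu = X.mean(axis=0)
    S_t = (X - mu).T @ (X - mu)                        # total scatter
    S_b = np.zeros((D, D))
    for l in np.unique(lang):
        Xl, spk_l = X[lang == l], spk[lang == l]
        mu_l = Xl.mean(axis=0)                         # language mean
        for s in np.unique(spk_l):
            Xls = Xl[spk_l == s]
            mu_ls = Xls.mean(axis=0)                   # speaker mean within language
            S_b += len(Xls) * np.outer(mu_ls - mu_l, mu_ls - mu_l)
    S_w = S_t - S_b                                    # normalized within-speaker scatter
    return S_w, S_b

# Toy data: 3 languages x 5 speakers x 4 i-vectors of dimension 10.
X = np.random.randn(60, 10)
spk = np.repeat(np.arange(15), 4)
lang = np.repeat(np.arange(3), 20)
S_w, S_b = ln_lda_scatters(X, spk, lang)
```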
PLDA
Probabilistic Linear Discriminant Analysis (PLDA) provides a powerful mechanism to distinguish between-speaker variability, separating the sources which characterize speaker information from all other sources of undesired variability that characterize distortions. Since i-vectors are assumed to be generated by some generative model, we can break them down into statistically independent speaker- and session-components with Gaussian distributions BIBREF18 , BIBREF19 . Although it has been shown that their distribution follows a Student's INLINEFORM0 rather than a Gaussian distribution BIBREF19 , length-normalizing the entire set of i-vectors as a pre-processing step can approximately Gaussianize their distribution BIBREF18 and, as a result, improve the performance of Gaussian PLDA to that of heavy-tailed PLDA BIBREF19 . A standard Gaussian PLDA assumes that an i-vector INLINEFORM1 is modelled according to DISPLAYFORM0
where INLINEFORM0 is the mean of the i-vectors, the columns of the matrix INLINEFORM1 contain the basis for the between-speaker subspace, the latent identity variable INLINEFORM2 denotes the speaker factor that represents the identity of the speaker, and the residual INLINEFORM3 , which is normally distributed with zero mean and full covariance matrix INLINEFORM4 , represents within-speaker variability.
For each acoustic feature we have trained two PLDA models. The first, out-domain PLDA ( INLINEFORM0 , INLINEFORM1 ), is trained using the training set presented in Table TABREF1 , and the second, in-domain PLDA ( INLINEFORM2 , INLINEFORM3 ), was trained using the unlabelled development set. Our efforts to cluster the development set (e.g. using the out-domain PLDA) were not very successful, as it appears that almost all segments are uttered by different speakers. Therefore, each i-vector was considered to be uttered by a distinct speaker. We also set the number of speaker factors to 200.
Domain Adaptation
Domain adaptation has gained considerable attention with the aim of compensating for cross-speech-source variability of in-domain and out-of-domain data. The framework presented in BIBREF20 for unsupervised adaptation of out-domain PLDA parameters resulted in better performance for in-domain data. Using the in-domain and out-domain PLDA models trained in Section SECREF14 , we interpolated their parameters as follows: DISPLAYFORM0
We chose INLINEFORM0 for making our submission.
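The interpolation step reduces to a weighted combination of the two sets of PLDA parameters; the sketch below assumes a two-covariance parameterization, and the dictionary keys and the weight of 0.5 are placeholders (the interpolation weight actually used above is not reproduced here).

```python
import numpy as np

def interpolate_plda(plda_in, plda_out, alpha=0.5):
    """Linearly interpolate in-domain and out-of-domain PLDA parameters.
    Each model is a dict with an across-class covariance 'Phi' and a
    within-class covariance 'Sigma'; alpha is a placeholder weight."""
    return {
        "Phi":   alpha * plda_in["Phi"]   + (1 - alpha) * plda_out["Phi"],
        "Sigma": alpha * plda_in["Sigma"] + (1 - alpha) * plda_out["Sigma"],
    }

# Toy example with 4-dimensional parameters.
rng = np.random.default_rng(0)
A_in, A_out = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
plda_in  = {"Phi": A_in @ A_in.T,   "Sigma": np.eye(4)}
plda_out = {"Phi": A_out @ A_out.T, "Sigma": np.eye(4)}
plda_adapted = interpolate_plda(plda_in, plda_out, alpha=0.5)
```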
Score Computation and Normalization
For the one-segment enrolment condition, the speaker model is the length-normalized i-vector of that segment; for the three-segment enrolment condition, we simply used the length-normalized mean of the length-normalized i-vectors as the speaker model. Each speaker model is tested against each test segment as in the trial list. For each pair of trial i-vectors INLINEFORM0 and INLINEFORM1 , the PLDA score is computed as DISPLAYFORM0
in which DISPLAYFORM0 DISPLAYFORM1
and INLINEFORM0 and INLINEFORM1 . It has been shown and proved in our experiments that score normalization can have a great impact on the performance of the recognition system. We used the symmetric s-norm proposed in BIBREF19 which normalizes the score INLINEFORM2 of the pair INLINEFORM3 using the formula DISPLAYFORM0
where the means INLINEFORM0 and standard deviations INLINEFORM1 are computed by matching INLINEFORM2 and INLINEFORM3 against the unlabelled set as the impostor speakers, respectively.
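The symmetric normalization described above can be sketched as follows: each trial score is normalized twice, using the means and standard deviations of the scores of the enrolment and test i-vectors against an impostor cohort (here standing in for the unlabelled set), and the two normalized values are averaged. A cosine score stands in for the PLDA score, and the toy data are assumptions for illustration.

```python
import numpy as np

def cosine_score(a, b):
    """Stand-in scoring function (the system above uses PLDA scores)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def s_norm(enrol, test, cohort, score=cosine_score):
    """Symmetric s-norm of the score s(enrol, test) against an impostor cohort."""
    raw = score(enrol, test)
    s_e = np.array([score(enrol, c) for c in cohort])   # enrolment vs. cohort
    s_t = np.array([score(test, c) for c in cohort])    # test vs. cohort
    return 0.5 * ((raw - s_e.mean()) / (s_e.std() + 1e-8) +
                  (raw - s_t.mean()) / (s_t.std() + 1e-8))

# Toy usage with 300-dimensional i-vectors and a cohort of 200 impostors.
rng = np.random.default_rng(1)
enrol, test = rng.standard_normal(300), rng.standard_normal(300)
cohort = rng.standard_normal((200, 300))
normalized = s_norm(enrol, test, cohort)
```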
Quality Measure Function
It has been shown that there is a dependency between the value of the INLINEFORM0 threshold and the duration of both enrolment and test segments. Applying the quality measure function (QMF) BIBREF3 enabled us to compensate for the shift in the INLINEFORM1 threshold due to the differences in speech duration. We conducted some experiments to estimate the dependency of the INLINEFORM2 threshold shift on the duration of the test segment, and used the following QMF for PLDA verification scores: DISPLAYFORM0
where INLINEFORM0 is the duration of the test segment in seconds.
Calibration
In the literature, the performance of speaker recognition is usually reported in terms of the calibration-insensitive equal error rate (EER) or the minimum decision cost function ( INLINEFORM0 ). However, in real applications of speaker recognition there is a need to present recognition results in terms of calibrated log-likelihood ratios. We have utilized the BOSARIS Toolkit BIBREF21 for calibration of scores. INLINEFORM1 provides an ideal reference value for judging calibration. If INLINEFORM2 is minimized, then the system can be said to be well calibrated.
The choice of target probability ( INLINEFORM0 ) had a great impact on the performance of the calibration. We set INLINEFORM1 for our primary submission, as it performed best on the development set; for our secondary submission INLINEFORM2 was used.
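The calibration above is done with the BOSARIS toolkit; as a rough stand-in, the sketch below fits an affine score-to-LLR mapping with scikit-learn's logistic regression on labelled development trials, with a simplified handling of the target prior. The synthetic score distributions are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_calibration(dev_scores, dev_labels):
    """Fit an affine score -> log-likelihood-ratio mapping on development
    trials (dev_labels: 1 for target trials, 0 for non-target trials)."""
    lr = LogisticRegression()
    lr.fit(dev_scores.reshape(-1, 1), dev_labels)
    a, b = float(lr.coef_[0, 0]), float(lr.intercept_[0])
    # Subtract the logit of the empirical target proportion so the output
    # behaves as an LLR rather than a posterior logit (simplified handling).
    prior = dev_labels.mean()
    offset = np.log(prior / (1 - prior))
    return lambda s: a * s + b - offset

# Toy usage: synthetic target / non-target score distributions.
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(2, 1, 500), rng.normal(-2, 1, 5000)])
labels = np.concatenate([np.ones(500), np.zeros(5000)])
to_llr = train_calibration(scores, labels)
llrs = to_llr(scores)
```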
Results and Discussion
In this section we present the results obtained on the protocol provided by NIST on the development set, which is supposed to mirror that of the evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components, we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP; in fusion, however, NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we see performance degradation for MFCC as well as for the fusion; PLP, however, does not seem to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally, in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in both LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, which is particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.
Time Analysis
This section reports on the CPU execution time (single threaded) and the amount of memory used to process a single trial, which includes the time for creating models from the enrolment data and the time needed for processing the test segments. The analysis was performed on an Intel(R) Xeon(R) CPU E5-2670 2.60GHz. The results are shown in Table TABREF27 . We used the time command in Unix to report these results. The user time is the actual CPU time used in executing the process (single thread). The real time is the wall clock time (the elapsed time, including time slices used by other processes and the time the process spends blocked). The system time is the amount of CPU time spent in the kernel within the process. We have also reported the memory allocated for each stage of execution. The most computationally intensive stage is the extraction of i-vectors (both MFCC- and PLP-based i-vectors), which also depends on the duration of the segments. For enrolment, we have reported the time required to extract a model from a segment with a duration of 140 seconds and a speech duration of 60 seconds. The time and memory required for front-end processing are negligible compared to the i-vector extraction stage, since they only include matrix operations. The time required for our SAD, which increases linearly with the duration of the segment, is also reported.
Conclusions and Perspectives
We have presented the Intelligent Voice speaker recognition system used for the NIST 2016 speaker recognition evaluation. Our system is based on a score fusion of MFCC- and PLP-based i-vector/PLDA systems. We have described the main components of the system, including acoustic feature extraction, speech activity detection and i-vector extraction as front-end processing, and language normalization, short-duration compensation, channel compensation and domain adaptation as back-end processing. For our future work, we intend to use the ALISP segmentation technique BIBREF22 in order to extract meaningful acoustic units with which to train supervised GMM or DNN models. | EER 16.04, Cmindet 0.6012, Cdet 0.6107
442f8da2c988530e62e4d1d52c6ec913e3ec5bf1 | 442f8da2c988530e62e4d1d52c6ec913e3ec5bf1_0 | Q: Which are the novel languages on which SRE placed emphasis on?
Text: Introduction
Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task, as in previous years, is to perform speaker detection with the focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation of languages addressed in this evaluation, the recording environment, and the variability of test segment duration influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation.
Over recent years, the i-vector representation of speech segments has been widely used by state-of-the-art speaker recognition systems BIBREF0 . The speaker recognition technology based on i-vectors currently dominates the research field due to its performance, low computational cost and the compatibility of i-vectors with machine learning techniques. This dominance is reflected by the recent NIST i-vector machine learning challenge BIBREF1 , which was designed to find the most promising algorithmic approaches to speaker recognition specifically on the basis of i-vectors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . DNN-based frame alignment, which has achieved remarkable performance in text-independent speaker recognition on English data BIBREF6 , BIBREF7 , failed to provide recognition performance even comparable to that of the traditional GMM. Therefore, we concentrated on the cepstral-based GMM/i-vector system.
We outline in this paper the Intelligent Voice system, techniques and results obtained on the SRE 2016 development set that will mirror the evaluation condition as well as the timing report. Section SECREF2 describes the data used for the system training. The front-end and back-end processing of the system are presented in Sections SECREF3 and SECREF4 respectively. In Section SECREF5 , we describe experimental evaluation of the system on the SRE 2016 development set. Finally, we present a timing analysis of the system in Section SECREF6 .
Training Condition
The fixed training condition is used to build our speaker recognition system. Only conversational telephone speech data from datasets released through the Linguistic Data Consortium (LDC) have been used, including NIST SRE 2004-2010 and the Switchboard corpora (Switchboard Cellular Parts I and II, Switchboard2 Phase I, II and III) for different steps of system training. A more detailed description of the data used in the system training is presented in Table TABREF1 . We have also included the unlabelled set of 2472 telephone calls from both minor (Cebuano and Mandarin) and major (Tagalog and Cantonese) languages provided by NIST in the system training. We will indicate when and how we used this set in the training in the following sections.
Front-End Processing
In this section we provide a description of the main steps in the front-end processing of our speaker recognition system, including speech activity detection, and acoustic and i-vector feature extraction.
Speech Activity Detection
The first stage of any speaker recognition system is to detect the speech content in an audio signal. An accurate speech activity detector (SAD) can improve the speaker recognition performance. Several techniques have been proposed for SAD, including unsupervised methods based on thresholding the signal energy, and supervised methods that train a speech/non-speech classifier such as support vector machines (SVM) BIBREF8 and Gaussian mixture models (GMMs) BIBREF9 . Hidden Markov models (HMMs) BIBREF10 have also been successful. Recently, it has been shown that DNN systems achieve impressive improvements in performance, especially at low signal to noise ratios (SNRs) BIBREF11 . In our work we have utilized a two-class DNN-HMM classifier to perform this task. The DNN-HMM hybrid configuration with cross-entropy as the objective function has been trained with the back-propagation algorithm. The softmax layer produces posterior probabilities for speech and non-speech, which were then converted into log-likelihoods. Using 2-state HMMs corresponding to speech and non-speech, frame-wise decisions are made by Viterbi decoding. As input to the network, we fed 40-dimensional filter-bank features along with 7 frames from each side. The network has 6 hidden layers with 512 units each. The architecture of our DNN-HMM SAD is shown in Figure FIGREF3 . Approximately 100 hours of speech data from the Switchboard telephony data with word alignments as ground-truth were used to train our SAD. The DNN training is performed on an NVIDIA TITAN X GPU, using the Kaldi software BIBREF12 . Evaluated on 50 hours of telephone speech data from the same database, our DNN-HMM SAD indicated a frame-level misclassification (speech/non-speech) rate of 5.9%, whereas an energy-based SAD did not perform better than 20%.
Acoustic Features
For acoustic features we have experimented with different configurations of cepstral features. We have used 39-dimensional PLP features and 60-dimensional MFCC features (including their first and second order derivatives) as acoustic features. Moreover, our experiments indicated that the combination of these two feature sets performs particularly well in score fusion. Both PLP and MFCC are extracted at 8kHz sample frequency using Kaldi BIBREF12 with 25 and 20 ms frame lengths, respectively, and a 10 ms overlap (other configurations are the same as Kaldi defaults). For each utterance, the features are centered using a short-term (3s window) cepstral mean and variance normalization (ST-CMVN). Finally, we employed our DNN-HMM speech activity detector (SAD) to drop non-speech frames.
i-Vector Features
Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as the universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named the total variability matrix BIBREF0 . For each acoustic feature, we trained a full-covariance, gender-independent UBM model with 2048 Gaussians, followed by a 600-dimensional i-vector extractor, to establish our MFCC- and PLP-based i-vector systems. The unlabelled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 .
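The sufficient-statistics step that precedes i-vector extraction can be sketched as follows: frame posteriors are computed against a diagonal-covariance UBM and accumulated into zero- and first-order statistics. The toy UBM below is random and much smaller than the 2048-component model described above; in the system itself this is handled by Kaldi.

```python
import numpy as np

def baum_welch_stats(frames, weights, means, variances):
    """Zero- and first-order statistics of `frames` w.r.t. a diagonal-covariance GMM.

    frames:    (T, D) acoustic features
    weights:   (C,)   mixture weights
    means:     (C, D) component means
    variances: (C, D) diagonal covariances
    Returns N (C,) occupation counts and F (C, D) first-order sums.
    """
    # Log-likelihood of every frame under every component (diagonal Gaussians).
    diff = frames[:, None, :] - means[None, :, :]                      # (T, C, D)
    logp = (-0.5 * (np.log(2 * np.pi * variances)[None]
                    + diff ** 2 / variances[None]).sum(-1)
            + np.log(weights)[None])                                   # (T, C)
    logp -= logp.max(axis=1, keepdims=True)                            # numerical stability
    gamma = np.exp(logp)
    gamma /= gamma.sum(axis=1, keepdims=True)                          # frame posteriors
    N = gamma.sum(axis=0)                                              # zero-order stats
    F = gamma.T @ frames                                               # first-order stats
    return N, F

# Toy UBM with 8 components over 60-dimensional features and 500 frames.
rng = np.random.default_rng(3)
C, D, T = 8, 60, 500
N, F = baum_welch_stats(rng.standard_normal((T, D)),
                        np.full(C, 1.0 / C),
                        rng.standard_normal((C, D)),
                        np.ones((C, D)))
```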
It has been shown that successive acoustic observation vectors tend to be highly correlated. This may be problematic for maximum a posteriori (MAP) estimation of i-vectors. To investigate this issue, scaling the zero- and first-order Baum-Welch statistics before presenting them to the i-vector extractor has been proposed. It turns out that a scale factor of 0.33 gives a slight edge, resulting in a better decision cost function BIBREF13 . This scaling has been applied in training the i-vector extractor as well as in testing.
Back-End Processing
This section provides the steps performed in back-end processing of our speaker recognition system.
Nearest-neighbor Discriminant Analysis (NDA)
The nearest-neighbor discriminant analysis is a nonparametric discriminant analysis technique which was proposed in BIBREF14 , and recently used in speaker recognition BIBREF15 . The nonparametric within- and between-class scatter matrices INLINEFORM0 and INLINEFORM1 , respectively, are computed based on INLINEFORM2 nearest-neighbor sample information. The NDA transform is then formed using eigenvectors of INLINEFORM3 . It has been shown that as the number of nearest neighbors INLINEFORM4 approaches the number of samples in each class, the NDA essentially becomes the LDA projection. Based on the findings in BIBREF15 , NDA outperformed LDA due to its ability to capture the local structure and boundary information within and across different speakers. We applied a INLINEFORM5 NDA projection matrix, computed using the 10 nearest neighbors of each sample, on centered i-vectors. The resulting dimensionality-reduced i-vectors are then whitened using both the training data and the unlabelled development set.
Short-Duration Variability Compensation
The enrolment condition of the development set is supposed to provide at least 60 seconds of speech data for each target speaker. Nevertheless, our SAD indicates that the speech content is as low as 26 seconds in some cases. The test segment durations, which range from 9 to 60 seconds of speech material, can result in poor performance for shorter segments. As indicated in Figure FIGREF8 , more than one third of the test segments have a speech duration of less than 20 seconds. We have addressed this issue by proposing a short-duration variability compensation method. The proposed method works by first extracting, from each audio segment in the unlabelled development set, a partial excerpt of 10 seconds of speech material, with a random selection of the starting point (Figure FIGREF9 ). Each audio file in the unlabelled development set, together with its extracted excerpt, results in two 400-dimensional i-vectors, one of which has at most 10 seconds of speech material. Considering each pair as one class, we computed a INLINEFORM0 LDA projection matrix to remove directions attributed to duration variability. Moreover, the projected i-vectors are also subjected to a within-class covariance normalization (WCCN) using the same class labels.
Language Normalization
Language-source normalization is an effective technique for reducing language dependency in the state-of-the-art i-vector/PLDA speaker recognition system BIBREF16 . It can be implemented by extending SN-LDA BIBREF17 in order to mitigate variations that separate languages. This can be accomplished by using the language label to identify different sources during training. Language Normalized-LDA (LN-LDA) utilizes a language-normalized within-speaker scatter matrix INLINEFORM0 which is estimated as the variability not captured by the between-speaker scatter matrix, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the total scatter and normalized between-speaker scatter matrices respectively, and are formulated as follows: DISPLAYFORM0
where INLINEFORM0 is the total number of i-vectors and DISPLAYFORM0
where INLINEFORM0 is the number of languages in the training set, INLINEFORM1 is the number of speakers in language INLINEFORM2 , INLINEFORM3 is the mean of INLINEFORM4 i-vectors from speaker INLINEFORM5 and language INLINEFORM6 and finally INLINEFORM7 is the mean of all i-vectors in language INLINEFORM8 . We applied a INLINEFORM9 SN-LDA projection matrix to reduce the i-vector dimensions down to 300.
PLDA
Probabilistic Linear Discriminant Analysis (PLDA) provides a powerful mechanism to distinguish between-speaker variability, separating the sources which characterize speaker information from all other sources of undesired variability that characterize distortions. Since i-vectors are assumed to be generated by some generative model, we can break them down into statistically independent speaker- and session-components with Gaussian distributions BIBREF18 , BIBREF19 . Although it has been shown that their distribution follows a Student's INLINEFORM0 rather than a Gaussian distribution BIBREF19 , length-normalizing the entire set of i-vectors as a pre-processing step can approximately Gaussianize their distribution BIBREF18 and, as a result, improve the performance of Gaussian PLDA to that of heavy-tailed PLDA BIBREF19 . A standard Gaussian PLDA assumes that an i-vector INLINEFORM1 is modelled according to DISPLAYFORM0
where INLINEFORM0 is the mean of the i-vectors, the columns of the matrix INLINEFORM1 contain the basis for the between-speaker subspace, the latent identity variable INLINEFORM2 denotes the speaker factor that represents the identity of the speaker, and the residual INLINEFORM3 , which is normally distributed with zero mean and full covariance matrix INLINEFORM4 , represents within-speaker variability.
For each acoustic feature we have trained two PLDA models. The first, out-domain PLDA ( INLINEFORM0 , INLINEFORM1 ), is trained using the training set presented in Table TABREF1 , and the second, in-domain PLDA ( INLINEFORM2 , INLINEFORM3 ), was trained using the unlabelled development set. Our efforts to cluster the development set (e.g. using the out-domain PLDA) were not very successful, as it appears that almost all segments are uttered by different speakers. Therefore, each i-vector was considered to be uttered by a distinct speaker. We also set the number of speaker factors to 200.
Domain Adaptation
Domain adaptation has gained considerable attention with the aim of compensating for cross-speech-source variability of in-domain and out-of-domain data. The framework presented in BIBREF20 for unsupervised adaptation of out-domain PLDA parameters resulted in better performance for in-domain data. Using the in-domain and out-domain PLDA models trained in Section SECREF14 , we interpolated their parameters as follows: DISPLAYFORM0
We chose INLINEFORM0 for making our submission.
Score Computation and Normalization
For the one-segment enrolment condition, the speaker model is the length-normalized i-vector of that segment; for the three-segment enrolment condition, we simply used the length-normalized mean of the length-normalized i-vectors as the speaker model. Each speaker model is tested against each test segment as in the trial list. For each pair of trial i-vectors INLINEFORM0 and INLINEFORM1 , the PLDA score is computed as DISPLAYFORM0
in which DISPLAYFORM0 DISPLAYFORM1
and INLINEFORM0 and INLINEFORM1 . It has been shown and proved in our experiments that score normalization can have a great impact on the performance of the recognition system. We used the symmetric s-norm proposed in BIBREF19 which normalizes the score INLINEFORM2 of the pair INLINEFORM3 using the formula DISPLAYFORM0
where the means INLINEFORM0 and standard deviations INLINEFORM1 are computed by matching INLINEFORM2 and INLINEFORM3 against the unlabelled set as the impostor speakers, respectively.
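A minimal sketch of the enrolment-side model construction described at the start of this subsection: each enrolment i-vector is length-normalized, the vectors are averaged, and the mean is length-normalized again before scoring. The dimensions and data are illustrative.

```python
import numpy as np

def length_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def speaker_model(enrol_ivectors):
    """Speaker model as the length-normalized mean of length-normalized
    enrolment i-vectors (a single i-vector is simply length-normalized)."""
    normed = np.array([length_normalize(v) for v in enrol_ivectors])
    return length_normalize(normed.mean(axis=0))

# Three enrolment segments, 300-dimensional i-vectors.
rng = np.random.default_rng(4)
model = speaker_model(rng.standard_normal((3, 300)))
```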
Quality Measure Function
It has been shown that there is a dependency between the value of the INLINEFORM0 threshold and the duration of both enrolment and test segments. Applying the quality measure function (QMF) BIBREF3 enabled us to compensate for the shift in the INLINEFORM1 threshold due to the differences in speech duration. We conducted some experiments to estimate the dependency of the INLINEFORM2 threshold shift on the duration of the test segment, and used the following QMF for PLDA verification scores: DISPLAYFORM0
where INLINEFORM0 is the duration of the test segment in seconds.
Calibration
In the literature, the performance of speaker recognition is usually reported in terms of the calibration-insensitive equal error rate (EER) or the minimum decision cost function ( INLINEFORM0 ). However, in real applications of speaker recognition there is a need to present recognition results in terms of calibrated log-likelihood ratios. We have utilized the BOSARIS Toolkit BIBREF21 for calibration of scores. INLINEFORM1 provides an ideal reference value for judging calibration. If INLINEFORM2 is minimized, then the system can be said to be well calibrated.
The choice of target probability ( INLINEFORM0 ) had a great impact on the performance of the calibration. We set INLINEFORM1 for our primary submission, as it performed best on the development set; for our secondary submission INLINEFORM2 was used.
Results and Discussion
In this section we present the results obtained on the protocol provided by NIST on the development set, which is supposed to mirror that of the evaluation set. The results are shown in Table TABREF26 . The first part of the table indicates the result obtained by the primary system. As can be seen, the fusion of MFCC and PLP (a simple sum of both MFCC and PLP scores) resulted in a relative improvement of almost 10%, as compared to MFCC alone, in terms of both INLINEFORM0 and INLINEFORM1 . In order to quantify the contribution of the different system components, we have defined different scenarios. In scenario A, we have analysed the effect of using LDA instead of NDA. As can be seen from the results, LDA outperforms NDA in the case of PLP; in fusion, however, NDA resulted in better performance in terms of the primary metric. In scenario B, we analysed the effect of using the short-duration compensation technique proposed in Section SECREF7 . Results indicate superior performance using this technique. In scenario C, we investigated the effects of language normalization on the performance of the system. If we replace LN-LDA with simple LDA, we see performance degradation for MFCC as well as for the fusion; PLP, however, does not seem to be adversely affected. The effect of using QMF is also investigated in scenario D. Finally, in scenario E, we can see the major improvement obtained through the use of the domain adaptation technique explained in Section SECREF16 . For our secondary submission, we incorporated a disjoint portion of the labelled development set (10 out of 20 speakers) in both LN-LDA and in-domain PLDA training. We evaluated the system on almost 6k out of 24k trials from the other portion to avoid any over-fitting, which is particularly important for the domain adaptation technique. This resulted in a relative improvement of 11% compared to the primary system in terms of the primary metric. However, the results can be misleading, since the recording condition may be the same for all speakers in the development set.
Time Analysis
This section reports on the CPU execution time (single threaded) and the amount of memory used to process a single trial, which includes the time for creating models from the enrolment data and the time needed for processing the test segments. The analysis was performed on an Intel(R) Xeon(R) CPU E5-2670 2.60GHz. The results are shown in Table TABREF27 . We used the time command in Unix to report these results. The user time is the actual CPU time used in executing the process (single thread). The real time is the wall clock time (the elapsed time, including time slices used by other processes and the time the process spends blocked). The system time is the amount of CPU time spent in the kernel within the process. We have also reported the memory allocated for each stage of execution. The most computationally intensive stage is the extraction of i-vectors (both MFCC- and PLP-based i-vectors), which also depends on the duration of the segments. For enrolment, we have reported the time required to extract a model from a segment with a duration of 140 seconds and a speech duration of 60 seconds. The time and memory required for front-end processing are negligible compared to the i-vector extraction stage, since they only include matrix operations. The time required for our SAD, which increases linearly with the duration of the segment, is also reported.
Conclusions and Perspectives
We have presented the Intelligent Voice speaker recognition system used for the NIST 2016 speaker recognition evaluation. Our system is based on a score fusion of MFCC- and PLP-based i-vector/PLDA systems. We have described the main components of the system, including acoustic feature extraction, speech activity detection and i-vector extraction as front-end processing, and language normalization, short-duration compensation, channel compensation and domain adaptation as back-end processing. For our future work, we intend to use the ALISP segmentation technique BIBREF22 in order to extract meaningful acoustic units with which to train supervised GMM or DNN models. | Cebuano and Mandarin, Tagalog and Cantonese
ae60079da9d3d039965368acbb23c6283bc3da94 | ae60079da9d3d039965368acbb23c6283bc3da94_0 | Q: Does this approach perform better than context-based word embeddings?
Text: Introduction
Word embedding is a very active area of research. It consists of using a text corpus to characterize and embed words into rich high-dimensional vector spaces. By mining a text corpus, it is possible to embed words in a continuous space where semantically similar words are embedded close together. By encoding words into vectors, it is possible to represent semantic properties of these words in a way that is more expressive and useful for tasks of natural language processing. Word embeddings have been effectively used for sentiment analysis, machine translation, and other language-related tasks BIBREF0 , BIBREF1 .
The basic idea behind all methods of word embedding is the distributional hypothesis: “A word is characterized by the company it keeps" BIBREF2 , BIBREF3 . Based on this idea, count-based methods such as LSA BIBREF4 , and predictive methods that use neural networks to learn the embedding vectors were developed, and used in research with success BIBREF5 , BIBREF6 .
In this work, we propose a new approach to learn word embeddings that is based on the etymological roots of words. Our approach relies on the fact that a shared etymological root between two words expresses a deliberate semantic similarity between these two words; by leveraging information on these semantic similarities, we derive the embedding vectors of words. This is akin to extending the distributional hypothesis to consider etymological context as well as textual context: words that appear in similar etymological contexts must also express similar concepts.
Based on this hypothesis, our approach consists of building a graph that captures these etymological relationships, and reducing the dimensionality of its adjacency matrix to learn word embeddings. Our approach can be applied to vocabularies of any language. Since our work relies on etymology, it requires a dependable way to obtain the etymological roots of words. This is why our approach is particularly well suited for the Chinese language, and other languages that borrow some vocabulary from Chinese. Chinese characters are representative ideograms, so we can consider each character as etymological information that is readily available and easily accessible. We note that our approach can also be applied to other languages with known etymological roots, for example English or Spanish with Latin roots.
To verify the word embeddings learned by our model, we use the task of synonym discovery, whereby we analyze whether it is possible to identify a pair of words as synonyms only through their embedding vectors. Synonym discovery is a common task in research, and it has been used before to test word embedding schemes BIBREF0 . We compare the performance of our Chinese word embedding vectors in the task of synonym discovery against another set of embedding vectors that was constructed with a co-occurrence model BIBREF1 . We also investigate the performance of synonym discovery with the Sino-Korean word embeddings learned by our method. Our test results show that our approach outperforms the previous model.
Our approach can be applied to vocabularies of any language. Since our work relies on etymology, it requires a dependable way to obtain the etymological roots of words. In languages with primarily phonetic writing systems, inferring the etymological roots of words is a significant challenge that requires intellectual work to trace words back to their ancestors. This is perhaps the reason why little etymology-based research has been done in the data mining community. That stands in contrast to languages with logographic writing systems, where a word carries morphological information in its writing. This makes the task of etymology extraction much simpler. This is why our approach is particularly well suited for the Chinese language, and the subset of the Korean vocabulary that is comprised of Sino-Korean words (i.e. Korean words that have been borrowed from Chinese).
Written Chinese is comprised of a large set of Hanzi, or characters. Generally, one character represents one syllable of spoken Chinese, and it may represent a monosyllabic word or be part of a polysyllabic word. The characters themselves can be composed to form new, more complex, characters. Chinese writing has also been adopted in other languages such as Korean, Japanese and formerly also Vietnamese. In this work, we use each character as an etymological root that forms part of a word (which is either mono- or polysyllabic), and we study Chinese vocabulary in Korean and in the Chinese language.
Related work
There is limited research on etymological networks in the English language. In particular, BIBREF7 and BIBREF8 use an etymological network-based approach to study movie scripts and reviews in English.
When it comes to work that studies the Chinese writing system, a popular topic is to study how radicals combine to form more complex characters BIBREF9 . Some studies have created networks based on word co-occurrence BIBREF10 . We found only one study that creates a network based on how characters mix to form words BIBREF11 .
The task of synonym discovery in Chinese vocabulary has been tackled in previous work BIBREF12 , BIBREF13 . These studies use a large corpus from the Chinese Wikipedia, and identify synonyms by building a graph where words are linked if their Wikipedia articles link to each other. These studies do not report their performance in general, instead reporting some identified synonym pairs.
In another study, BIBREF14 defined an etymological graph-based framework from Sino-Korean data, and used it in a supervised classification scheme to find pairs of Chinese characters (i.e. etymological roots) that were synonyms. It showed that the etymological graph approach can be effectively used to extract knowledge from a complex etymological network.
Word embedding was defined originally in BIBREF15 , where the authors use a neural network-based approach to generate a language model of which word embeddings are a byproduct. Since then, numerous studies have been written where both neural networks and count-based models have been used to produce word embeddings BIBREF5 , BIBREF6 . Aligned embeddings have also been used for machine translation; in particular, BIBREF1 attempts translation between English and Chinese.
To the best of our knowledge, there are no papers that explore any data mining task based on etymology in either languages with phonetic alphabets or with logographic alphabets.
Building the etymological graph
An etymological graph is a bipartite network with two sets of nodes: one representing the roots of the words in a language, and the other representing the words themselves. In an etymological graph, two nodes are connected if one node represents an etymological root of the word represented by the other, as shown in Figure .
To build an etymological graph, one may start from a list of words annotated with their etymological roots. By iterating over the list, and over the roots of each word, it is possible to add the corresponding nodes and edges to the graph. This procedure is expressed in algorithm .
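The implementation described later in the paper uses NetworkX, so the construction procedure can be sketched as below; the three-word vocabulary is made up purely for illustration.

```python
import networkx as nx

def build_etymological_graph(word_roots):
    """Build a bipartite graph from a mapping word -> list of etymological
    roots (for Chinese vocabulary, the characters of the word)."""
    G = nx.Graph()
    for word, roots in word_roots.items():
        G.add_node(word, bipartite="word")
        for root in roots:
            G.add_node(root, bipartite="root")
            G.add_edge(root, word)          # root -- word edge
    return G

# Illustrative entries: each word is annotated with its characters as roots.
word_roots = {
    "学校": ["学", "校"],     # school
    "学生": ["学", "生"],     # student
    "生日": ["生", "日"],     # birthday
}
G = build_etymological_graph(word_roots)
words = [n for n, d in G.nodes(data=True) if d["bipartite"] == "word"]
roots = [n for n, d in G.nodes(data=True) if d["bipartite"] == "root"]
```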
As part of our research, we built two graphs using data collected by crawling an online dictionary for the set of Sino-Korean vocabulary, and an online Chinese dataset for Chinese vocabulary. Some statistics about these graphs are shown in table . It is interesting to note that the distribution over word length in Chinese is different from that in Korean. This is perhaps due to the differences in the ways Chinese loan-words are used in the Korean language and the ways Chinese uses its own words. These differences should not affect the outcome of our model, because they do not affect the construction of the graph.
Learning word embeddings
To obtain the word embeddings from the graphs, truncated Singular Value Decomposition (SVD) was applied to their biadjacency matrices BIBREF16 . We use SVD inspired by the techniques of LSA BIBREF4 , where it is possible to map words and documents to “hidden" semantic characteristics.
The biadjacency matrix $A$ of a bipartite graph is a matrix of size $n \times m$ in which each column represents a node from one bipartite set, and each row represents a node from the other bipartite set. In the case of etymological graphs, each row represents a root node, while each column represents a word node; therefore the matrix $A$ has dimension $\#roots \times \#words$ .
By applying truncated SVD, we approximate the biadjacency matrix $A$ as the product of three matrices $U \Sigma V^*$, which is the closest rank-$k$ approximation of $A$. In this operation, $\Sigma$ is a diagonal matrix with the $k$ largest singular values on its diagonal, and $U$ and $V^*$ are matrices of size $\#roots \times k$ and $k \times \#words$ respectively, where $k$ is the dimension into which we chose to reduce the matrix $A$. We use the dimension-reduced column vectors of $V^*$ as embeddings for each word in our vocabulary.
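A sketch of the factorization step using SciPy's sparse truncated SVD is given below; the biadjacency matrix is a small random stand-in for the #roots x #words matrix described above, and taking the columns of V* as word embeddings follows the description in this section.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

def word_embeddings_from_biadjacency(A, k):
    """Truncated SVD A ~ U @ diag(s) @ Vt; the columns of Vt (one per word)
    are used as k-dimensional word embeddings."""
    u, s, vt = svds(A, k=k)
    order = np.argsort(-s)          # svds does not guarantee descending order
    return vt[order].T              # (#words, k): one embedding per word

# Stand-in biadjacency matrix: 500 roots x 2000 words, very sparse.
A = sparse_random(500, 2000, density=0.002, format="csr", random_state=5)
embeddings = word_embeddings_from_biadjacency(A, k=50)   # shape (2000, 50)
```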
Another matrix decomposition technique worth considering for future work is CUR factorization BIBREF17 . We are especially interested in its sparsity-maintaining characteristic, since large matrices such as ours can be managed more easily if they are sparse, and SVD eliminates the sparsity of our source matrices.
Verifying the word embeddings: Synonym discovery
To verify the validity of the embeddings, we selected the task of synonym discovery. To assess whether two words are synonyms, we measure their cosine similarity (i.e. inner product) as proposed in BIBREF18 . We expect synonyms to show a similarity score above a threshold, which we decide by studying the distribution of the similarity between random pairs of words. In other words, we obtain the dot product of vectors from random pairs of words, and compare them to the dot product of vectors from pairs of synonyms. As random pairs of words are expected to have little semantic relationship, the dot product of their embedded vectors is expected to be close to 0, while the dot product of vectors representing pairs of synonyms is expected to be far from 0 due to the semantic similarity between the pair of synonyms, which should be expressed by their embedding in a vector space.
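The verification procedure can be sketched as follows: dot products of embedding pairs for known synonyms are compared against the distribution of dot products of random word pairs, and a synonym pair is counted as detected when its score falls outside a band around zero. The one-standard-deviation rule mirrors the analysis reported in the results section below; the embeddings and synonym pairs here are random stand-ins for illustration.

```python
import numpy as np

def dot_scores(emb, pairs):
    return np.array([emb[i] @ emb[j] for i, j in pairs])

def synonym_detection_rate(emb, synonym_pairs, n_random=10000, seed=0):
    """Fraction of synonym pairs whose dot product falls outside one standard
    deviation of the dot products of random word pairs."""
    rng = np.random.default_rng(seed)
    random_pairs = rng.integers(0, emb.shape[0], size=(n_random, 2))
    random_scores = dot_scores(emb, random_pairs)
    threshold = random_scores.std()
    syn_scores = dot_scores(emb, synonym_pairs)
    return float(np.mean(np.abs(syn_scores) > threshold))

# Stand-in embeddings for 2000 words and 100 synonym pairs.
rng = np.random.default_rng(6)
emb = rng.standard_normal((2000, 50))
synonym_pairs = rng.integers(0, 2000, size=(100, 2))
rate = synonym_detection_rate(emb, synonym_pairs)
```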
Experiments
For comparison, we used the dataset of Chinese word embeddings that was released as part of BIBREF1 , which contains embeddings of Chinese words in 50 dimensions. We used this dataset on the same task: synonym discovery, measuring the similarity score as the inner product between vectors.
To obtain the "ground truth" of synonym pairs, we collected pairs of synonyms from online dictionaries for both Chinese and Sino-Korean vocabulary. For Sino-Korean, we generated queries with the words from the Sino-Korean vocabulary and crawled synonyms by searching the web BIBREF19 . In the same way, we crawled synonyms of Chinese vocabulary by querying the Chinese Synonyms Thesaurus service BIBREF20 . In this way we collected a total of 38,593 pairs of synonyms for Sino-Korean, while for Chinese we collected 45,731 pairs.
Performance of synonym discovery task
Our experiments show that we were able to reliably identify pairs of synonyms by comparing the embeddings of pairs of words. Performance was especially good in the Korean language graph, as can be seen in Figure , where we plot the distributions of the dot product between random pairs and pairs of synonyms. As shown in the figure, up to 70% of all synonyms have a similarity measure that places them outside the range covered by 99% of random pairs of words. Figure helps drive this point home by showing how the proportion of synonyms placed outside the standard deviation of the distribution of dot products of random word pairs varies as we vary the dimension of our embeddings. Note how only about 1% of random pairs of words appear outside of this range, and how the vast majority of them remain consistently concentrated around zero.
Our embeddings also proved to perform better than our benchmark dataset. Figure shows the distribution of the similarity measure between pairs of synonyms and random pairs of words in the benchmark dataset. In this sample, almost 32% of synonyms show a similarity score that places them away from zero, while 5% of random pairs of words are placed outside of that range. Table compares performance and dimensionality for both strategies to learn embeddings.
Computation speed of our model
An interesting feature of word embedding models based on matrix factorizations is that training time can be significantly shorter when compared with the time it may take to train a multi-layered neural network. For dimensions under 500, SVD can run very quickly, but as the dimension rises, the factorization step becomes significantly slower. Our model reaches its best performance at around 2000 dimensions, for which the matrix factorization takes over 5 minutes of computation.
Code for our model was developed in Python 3. In particular, we used the NetworkX Python package to manage and analyze our graphs, and the SciPy and NumPy libraries to work with matrices and vectors BIBREF21 , BIBREF22 , BIBREF23 . Our code ran on an Intel Core i7-4790 clocked at 3.60GHz with 16 GB of RAM. Table shows the running time of the factorization of both our graphs for different values of the dimension of the matrix decomposition.
These running times stand in contrast with the rather large times it takes to train a neural network model. Nonetheless, given that our embeddings require a higher number of dimensions to be effective, SVD on the dimension that we require has a relatively slow performance of up to 5 minutes.
The code for this paper, as well as the datasets and instructions on how to replicate this work are openly available BIBREF24 .
Conclusion
In this work, we have presented a model to learn word embeddings based on etymology. We have shown that the model can capture semantic information derived from a complex etymological network. Its performance is remarkably good in the task of synonym discovery. We believe it can also perform well in other tasks such as antonym discovery.
A noticeable difference between our word embeddings and existing ones is that ours require a much higher number of dimensions to perform well in synonym discovery. Publicly available datasets with word embeddings provide vectors with 25, 50 and 100 dimensions BIBREF0 ; but our embeddings reach their highest effectiveness at around 2,000 dimensions. This is likely a consequence of our data being very sparse: while words in word co-occurrence models can have an almost limitless set of contexts in which they appear, words in etymological graphs have a small number of etymological roots. All the words in our graphs are formed by 5 characters or less.
The approach covered in this paper also has some particular quirks that stem from the use of historical (i.e. etymological) data. This is because the meaning of words is not static, but rather evolves with time and use. Word embeddings that are learned from co-occurrence models are able to capture the ways in which words are used in the target corpus. This is not captured by a top-down model that is based on etymology. Our approach would capture the semantics of words from the word roots, rather than from how they are used in the text.
Our model also does not rely on very large text corpora, though instead it requires reliable access to etymological roots of words. Etymological dictionaries already capture some of this data, but languages continue to evolve and words to be coined at an ever faster pace, so techniques of machine learning will have to be used to obtain reliable access to etymological roots in other languages.
We believe that our model can help expand our understanding of word embeddings, and also help reevaluate the value of etymology in data mining and machine learning. We are excited to see etymological graphs used in other ways to extract knowledge. We are also especially interested in seeing this model applied to different languages.
Acknowledgment
K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(NRF-2016M3C4A7952587), the Ministry of Trade, Industry & Energy(MOTIE, Korea) under Industrial Technology Innovation Program(No.10073144), and the Brain Korea 21 Plus Project. | Yes |
83f567489da49966af3dc5df2d9d20232bb8cb1e | 83f567489da49966af3dc5df2d9d20232bb8cb1e_0 | Q: Have the authors tried this approach on other languages?
Text: Introduction
Word embedding is a very active area of research. It consists of using a text corpus to characterize and embed words into rich high-dimensional vector spaces. By mining a text corpus, it is possible to embed words in a continuous space where semantically similar words are embedded close together. By encoding words into vectors, it is possible to represent semantic properties of these words in a way that is more expressive and useful for tasks of natural language processing. Word embeddings have been effectively used for sentiment analysis, machine translation, and other language-related tasks BIBREF0 , BIBREF1 .
The basic idea behind all methods of word embedding is the distributional hypothesis: “A word is characterized by the company it keeps" BIBREF2 , BIBREF3 . Based on this idea, count-based methods such as LSA BIBREF4 , and predictive methods that use neural networks to learn the embedding vectors were developed, and used in research with success BIBREF5 , BIBREF6 .
In this work, we propose a new approach to learn word embeddings that is based on the etymological roots of words. Our approach relies on the fact that a shared etymological root between two words expresses a deliberate semantic similarity between these two words; by leveraging information on these semantic similarities, we derive the embedding vectors of words. This is akin to extending the distributional hypothesis to consider etymological context as well as textual context: words that appear in similar etymological contexts must also express similar concepts.
Based on this hypothesis, our approach consists of building a graph that captures these etymological relationships, and reducing the dimensionality of its adjacency matrix to learn word embeddings. Our approach can be applied to vocabularies of any language. Since our work relies on etymology, it requires a dependable way to obtain the etymological roots of words. This is why our approach is particularly well suited for the Chinese language, and other languages that borrow some vocabularies from Chinese. Chinese characters are representative ideograms, so we can consider each character as etymological information that is readily available and easily accessible. We note that our approach can also be applied to other languages with known etymological roots of words, for example, English or Spanish with Latin roots.
To verify the word embeddings learned by our model, we use the task of synonym discovery, whereby we analyze if it is possible to identify a pair of words as synonyms only through their embedding vectors. Synonym discovery is a common task in research, and it has been used before to test word embedding schemes BIBREF0 . We compare the performance of our Chinese word embedding vectors in the task of synonym discovery against another set of embedding vectors that was constructed with a co-occurrence model BIBREF1 . We also investigate the performance of synonym discovery with the Sino-Korean word embeddings learned by our method. Our test results show that our approach outperforms the previous model.
Our approach can be applied to vocabularies of any language. Since our work relies on etymology, it requires a dependable way to obtain the etymological roots of words. In languages with primarily phonetic writing systems, inferring the etymological roots of words is a significant challenge that requires intellectual work to trace words back to their ancestors. This is perhaps the reason that not much research has been made in the data mining community that is based on etymology. That stands in contrast to languages with logographic writing systems, where a word carries morphological information in its writing. This makes the task of etymology extraction much simpler. This is why our approach is particularly well suited for the Chinese language, and the subset of the Korean vocabulary that is comprised by Sino-Korean words (i.e. Korean words that have been borrowed from Chinese).
Written Chinese is composed of a large set of Hanzi, or characters. Generally, one character represents one syllable of spoken Chinese, and it may represent a monosyllabic word or be part of a polysyllabic word. The characters themselves can be composed to form new, more complex characters. Chinese writing has also been adopted in other languages such as Korean, Japanese and formerly also Vietnamese. In this work, we use each character as an etymological root that forms part of a word (which is either mono- or polysyllabic), and we study Chinese vocabulary in Korean and in the Chinese language.
Related work
There exists limited research on etymological networks in the English language. Particularly BIBREF7 , and BIBREF8 use an etymological network-based approach to study movie scripts and reviews in English.
When it comes to work that studies the Chinese writing system, a popular topic is to study how radicals combine to form more complex characters BIBREF9 . Some studies have created networks based on word co-occurrence BIBREF10 . We found only one study that creates a network based on how characters mix to form words BIBREF11 .
The task of synonym discovery in Chinese vocabulary has been tackled in previous work BIBREF12 , BIBREF13 . These studies use a large corpus from the Chinese Wikipedia, and identify synonyms by building a graph where words are linked if their Wikipedia articles link to each other. These studies do not report their performance in general, instead reporting some identified synonym pairs.
In another study, BIBREF14 defined an etymological graph-based framework from Sino-Korean data, and used it in a supervised classification scheme to find pairs of Chinese characters (e.g. etymological roots) that were synonyms. It showed that the etymological graph approach can be effectively used to extract knowledge from a complex etymological network.
Word embedding was defined originally in BIBREF15 , where the authors use a neural network-based approach to generate a language model of which word embeddings are a byproduct. Since then, numerous studies have been written where both neural networks and count-based models have been used to produce word embeddings BIBREF5 , BIBREF6 . Aligned embeddings have also been used for machine translation, particularly BIBREF1 attempts translation between English and the Chinese language.
To the best of our knowledge, there are no papers that explore any data mining task based on etymology in either languages with phonetic alphabets or with logographic alphabets.
Building the etymological graph
An etymological graph is a bipartite network with two sets of nodes: one that represents the roots of the words in a language, while the other set represents the words themselves. In an etymological graph, two nodes are connected if one node represents an etymological root of the word represented by the other, as shown in Figure .
To build an etymological graph, one may start from a list of words annotated with their etymological roots. By iterating over the list and over the roots of each word, it is possible to add nodes and edges to the graph in order. This procedure is sketched below.
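A minimal sketch of this construction in Python follows, using NetworkX (the package mentioned later in this section); the input format and the toy word list are illustrative assumptions rather than the exact data format used in the paper.

```python
import networkx as nx

def build_etymological_graph(annotated_words):
    """Build a bipartite graph linking words to their etymological roots.

    `annotated_words` is assumed to be an iterable of (word, roots) pairs,
    e.g. [("學校", ["學", "校"]), ...].
    """
    graph = nx.Graph()
    for word, roots in annotated_words:
        graph.add_node(word, bipartite="word")
        for root in roots:
            graph.add_node(root, bipartite="root")
            # An edge marks that `root` is an etymological root of `word`.
            graph.add_edge(root, word)
    return graph

# Toy example: two Sino-Korean words sharing the root 學.
toy_words = [("學校", ["學", "校"]), ("學生", ["學", "生"])]
g = build_etymological_graph(toy_words)
print(g.number_of_nodes(), g.number_of_edges())  # 5 4
```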
As part of our research, we built two graphs using data collected by crawling an online dictionary for the set of Sino-Korean vocabulary, and an online Chinese dataset for Chinese vocabulary. Some statistics about these graphs are shown in the table. It is interesting to note that the distribution over word length in Chinese is different from that in Korean. This is, perhaps, due to differences between the ways Chinese loan-words are used in the Korean language and the ways Chinese uses its own words. These differences should not affect the outcome of our model, because they do not affect the construction of the graph.
Learning word embeddings
To obtain the word embeddings from the graphs, truncated Singular Value Decomposition (SVD) was applied to their biadjacency matrices BIBREF16 . We use SVD inspired by the techniques of LSA BIBREF4 , where it is possible to map words and documents to “hidden" semantic characteristics.
The biadjacency matrix $A$ of a bipartite graph is a matrix of size $n \times m$ where each column represents a node from one bipartite set, and each row represents a node from the other bipartite set. In the case of etymological graphs, each row represents a root node, while each column represents a word node; therefore the matrix $A$ has dimension $\#roots \times \#words$ .
By applying SVD, we attempt to approximate the biadjacency matrix $A$ as the product of three matrices $U \Sigma V^*$ , which is the closest $k$ -dimension approximation of $A$ . In this operation, $\Sigma $ is a diagonal matrix with the $k$ largest singular values on the diagonal, and the matrices $U$ and $V^*$ are of size $\#roots \times k$ and $k \times \#words$ respectively, where $k$ is the dimension into which we chose to reduce the matrix $A$ . We use the dimension-reduced column vectors of $V^*$ as embeddings for each word in our vocabulary.
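As a self-contained illustration of this factorization step, the sketch below runs truncated SVD on a random sparse matrix with SciPy; the matrix sizes, density, and choice of $k$ are assumptions for demonstration only, and taking the columns of $V^*$ as word vectors follows the description above rather than the authors' exact code.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Illustrative stand-in for the #roots x #words biadjacency matrix.
n_roots, n_words, k = 500, 2000, 50
A = sparse_random(n_roots, n_words, density=0.01, format="csr", random_state=0)

# Truncated SVD keeping the k largest singular values: A ~= U @ diag(sigma) @ Vt.
U, sigma, Vt = svds(A, k=k)

# Each column of Vt corresponds to one word; use it as a k-dimensional embedding.
word_embeddings = Vt.T
print(word_embeddings.shape)  # (2000, 50)
```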
Another matrix decomposition technique worth considering for future work is CUR factorization BIBREF17 . We are especially interested in its sparsity-maintaining characteristic, since large matrices such as ours can be managed more easily if they are sparse, and SVD eliminates the sparsity of our source matrices.
Verifying the word embeddings: Synonym discovery
To verify the validity of the embeddings, we selected the task of synonym discovery. To assess whether two words are synonyms, we measure their cosine similarity (i.e. their inner product) as proposed in BIBREF18 . We expect synonyms to show a similarity score above a threshold, which we decide by studying the distribution of the similarity between random pairs of words. In other words, we obtain the dot product of vectors from random pairs of words, and compare them to the dot product of vectors from pairs of synonyms. As random pairs of words are expected to have little semantic relationship, the dot product of their embedded vectors is expected to be close to 0, while the dot product of vectors representing pairs of synonyms is expected to be far from 0 due to the semantic similarity between the two words, which should be expressed by their embedding in a vector space.
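A small sketch of this verification procedure is given below; the embedding dictionary, the synonym list, and the choice of one standard deviation as the threshold are illustrative assumptions based on the description above.

```python
import numpy as np

def similarity(u, v):
    """Inner product of two embedding vectors, used as the similarity score."""
    return float(np.dot(u, v))

def synonym_scores(embeddings, synonym_pairs, n_random=10000, seed=0):
    """Compare similarities of known synonym pairs against random word pairs."""
    rng = np.random.default_rng(seed)
    words = list(embeddings)
    syn = [similarity(embeddings[a], embeddings[b])
           for a, b in synonym_pairs if a in embeddings and b in embeddings]
    rand = [similarity(embeddings[a], embeddings[b])
            for a, b in zip(rng.choice(words, n_random), rng.choice(words, n_random))]
    # One possible threshold: one standard deviation away from the random-pair mean.
    threshold = float(np.mean(rand) + np.std(rand))
    return syn, rand, threshold
```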
Experiments
For comparison, we used the dataset of Chinese word embeddings that was released as part of BIBREF1 , which contains embeddings of Chinese words in 50 dimensions. We used this dataset on the same task: synonym discovery, measuring the similarity score as the inner product between vectors.
To obtain the “ground truth" of synonym pairs, we collected pairs of synonyms from online dictionaries for both Chinese and Sino-Korean vocabulary. For Sino-Korean, we generated queries with the words from the Sino-Korean vocabulary and crawled synonyms by searching data on the web BIBREF19 . In the same way, we crawled synonyms of Chinese vocabulary by searching data on the Chinese Synonyms Thesaurus service BIBREF20 . In this way we collected a total of 38,593 pairs of Sino-Korean synonyms, while in Chinese we collected 45,731 pairs.
Performance of synonym discovery task
Our experiments show that we were able to reliably identify pairs of synonyms by comparing the embeddings of pairs of words. Performance was especially good in the Korean language graph, as can be seen in Figure , where we plot distributions of the dot product between random pairs and pairs of synonyms. As shown in the figure, up to 70% of all synonyms have a similarity measure that places them outside the range covered by 99% of random pairs of words. Figure helps drive this point home by showing how the proportion of synonyms placed outside the standard deviation of the distribution of dot products of embeddings of random word pairs varies as we vary the dimension of our embeddings. Note how only about 1% of random pairs of words appear outside of this range, and the vast majority of them are consistently concentrated around zero.
Our embeddings also proved to perform better than those in our benchmark dataset. Figure shows the distribution of the similarity measure between pairs of synonyms and random pairs of words in the benchmark dataset. In this sample, almost 32% of synonyms show a similarity score that places them away from zero, while 5% of random pairs of words are placed outside of that range. Table compares performance and dimensionality for both strategies of learning embeddings.
Computation speed of our model
An interesting feature of word embedding models based on matrix factorizations is that training time can be significantly shorter when compared with the time it may take to train a multi-layered neural network. For dimensions under 500, SVD can run very quickly, but as the dimension rises, the factorization step becomes significantly slower. Our model reaches its best performance at around 2000 dimensions, for which the matrix factorization takes over 5 minutes of computation.
Code for our model was developed in Python 3. Particularly, we used the NetworkX Python package to manage and analyze our graphs, and the SciPy and NumPy libraries to work with matrices and vectors BIBREF21 , BIBREF22 , BIBREF23 . Our code ran on an Intel Core i7-4790 clocked at 3.60 GHz with 16 GB of RAM. The table shows the running time of the factorization of both our graphs for different values of the dimension of the matrix decomposition.
These running times stand in contrast with the rather long times it takes to train a neural network model. Nonetheless, because our embeddings require a higher number of dimensions to be effective, SVD at the dimension we require is relatively slow, taking up to 5 minutes.
The code for this paper, as well as the datasets and instructions on how to replicate this work are openly available BIBREF24 .
Conclusion
In this work, we have presented a model to learn word embeddings based on etymology. We have shown that the model can capture semantic information derived from a complex etymological network. Its performance is remarkably good in the task of synonym discovery. We believe it can also perform well in other tasks such as antonym discovery.
A noticeable difference between our word embeddings and existing ones is that ours require a much higher number of dimensions to perform well in synonym discovery. Publicly available datasets with word embeddings provide vectors with 25, 50 and 100 dimensions BIBREF0 ; but our embeddings reach their highest effectiveness at around 2,000 dimensions. This is likely a consequence of our data being very sparse: while words in word co-occurrence models can have an almost limitless set of contexts in which they appear, words in etymological graphs have a small number of etymological roots. All the words in our graphs are formed by 5 characters or less.
The approach covered in this paper also has some particular quirks that stem from the use of historical (i.e. etymological) data. This is because the meaning of words is not static, but rather evolves with time and use. Word embeddings that are learned from co-occurrence models are able to capture the ways in which words are used in the target corpus. This is not captured by a top-down model that is based on etymology. Our approach captures the semantics of words from the word roots, rather than from how they are used in the text.
Our model also does not rely on very large text corpora, but it does require reliable access to the etymological roots of words. Etymological dictionaries already capture some of this data, but languages continue to evolve and new words are coined at an ever faster pace, so machine learning techniques will have to be used to obtain reliable access to etymological roots in other languages.
We believe that our model can help expand our understanding of word embeddings, and also help reevaluate the value of etymology in data mining and machine learning. We are excited to see etymological graphs used in other ways to extract knowledge. We are also especially interested in seeing this model applied to different languages.
Acknowledgment
K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(NRF-2016M3C4A7952587), the Ministry of Trade, Industry & Energy(MOTIE, Korea) under Industrial Technology Innovation Program(No.10073144), and the Brain Korea 21 Plus Project. | No |
ff0f77392abc905fe76e0b8c28a76dfb0372a0ec | ff0f77392abc905fe76e0b8c28a76dfb0372a0ec_0 | Q: What features did they train on?
Text: Introduction
Word embeddings are most effective when they learn from both unstructured text and a graph of general knowledge BIBREF0 . ConceptNet 5 BIBREF1 is an open-data knowledge graph that is well suited for this purpose. It is accompanied by a pre-built word embedding model known as ConceptNet Numberbatch, which combines skip-gram embeddings learned from unstructured text with the relational knowledge in ConceptNet.
A straightforward application of the ConceptNet Numberbatch embeddings took first place in SemEval 2017 task 2, on semantic word similarity. For SemEval 2018, we built a system with these embeddings as a major component for a slightly more complex task.
The Capturing Discriminative Attributes task BIBREF2 emphasizes the ability of a semantic model to recognize relevant differences between terms, not just their similarities. As the task description states, “If you can tell that americano is similar to capuccino and espresso but you can't tell the difference between them, you don't know what americano is.”
The ConceptNet Numberbatch embeddings only measure the similarity of terms, and we hypothesized that we would need to represent more specific relationships. For example, the input triple “frog, snail, legs” asks us to determine whether “legs” is an attribute that distinguishes “frog” from “snail”. The answer is yes, because a frog has legs while a snail does not. The has relationship is one example of a specific relationship that is represented in ConceptNet.
To capture this kind of specific relationship, we built a model that infers relations between ConceptNet nodes, trained on the existing edges in ConceptNet and random negative examples. There are many models designed for this purpose; the one we decided on is based on Semantic Matching Energy (SME) BIBREF3 .
Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset.
We combined these features based on ConceptNet with features extracted from a few other resources in a LinearSVC classifier, using liblinear BIBREF4 via scikit-learn BIBREF5 . The classifier used only 15 features, of which 12 ended up with non-zero weights, from the five sources described. We aimed to avoid complexity in the classifier in order to prevent overfitting to the validation set; the power of the classifier should be in its features.
The classifier produced by this design (submitted late to the contest leaderboard) successfully avoided overfitting. It performed better on the test set than on the validation set, with a test $F_1$ score of 0.7368, whose margin of error overlaps with the evaluation's reported high score of 0.75.
At evaluation time, we accidentally submitted our results on the validation data, instead of the test data, to the SemEval leaderboard. Our code had truncated the results to the length of the test data, causing us to not notice the mismatch. This erroneous submission got a very low score, of course. This paper presents the corrected test results, which we submitted to the post-evaluation CodaLab leaderboard immediately after the results appeared. We did not change the classifier or data; the change was a one-line change to our code for outputting the classifier's predictions on the test set instead on the validation set.
Features
In detail, these are the five sources of features we used: direct similarity between terms in the ConceptNet Numberbatch embeddings; relations between the terms inferred over ConceptNet by the SME model; features that compose ConceptNet with WordNet; features that compose ConceptNet with Wikipedia; and a corpus-based feature that looks up two-word phrases in the Google Books 2-grams data.
The Relational Inference Model
To infer truth values for ConceptNet relations, we use a variant of the Semantic Matching Energy model BIBREF3 , adapted to work well on ConceptNet's vocabulary of relations. Instead of embedding relations in the same space as the terms, this model assigns new 10-dimensional embeddings to ConceptNet relations, yielding a compact model for ConceptNet's relatively small set of relations.
The model is trained to distinguish positive examples of ConceptNet edges from negative ones. The positive examples are edges directly contained in ConceptNet, or those that are entailed by changing the relation to a more general one or switching the directionality of a symmetric relation. The negative examples come from replacing one of the terms with a random other term, the relation with a random unentailed relation, or switching the directionality of an asymmetric relation.
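A schematic sketch of this negative sampling is shown below; it omits the entailment bookkeeping for relation generalization and symmetric relations, and the term list, relation list, and equal corruption probabilities are assumptions for illustration.

```python
import random

RELATIONS = ["RelatedTo", "IsA", "HasA", "PartOf", "CapableOf",
             "UsedFor", "HasContext", "HasProperty", "AtLocation"]

def corrupt(triple, terms, relations=RELATIONS, rng=random):
    """Make one negative example from a positive (head, relation, tail) edge by
    replacing a term, replacing the relation, or flipping the direction."""
    head, rel, tail = triple
    roll = rng.random()
    if roll < 1 / 3:
        return (rng.choice(terms), rel, tail)       # corrupt the head term
    elif roll < 2 / 3:
        return (head, rng.choice(relations), tail)  # corrupt the relation
    else:
        return (tail, rel, head)                    # reverse the direction

positive = ("frog", "HasA", "legs")
terms = ["frog", "snail", "legs", "espresso", "americano"]
print(corrupt(positive, terms, rng=random.Random(0)))
```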
We trained this model for approximately 3 million iterations (about 4 days of computation on an nVidia Titan Xp) using PyTorch BIBREF9 . The code of the model is available at https://github.com/LuminosoInsight/conceptnet-sme.
To extract features for the discriminative attribute task, we focus on a subset of ConceptNet relations that would plausibly be used as attributes: RelatedTo, IsA, HasA, PartOf, CapableOf, UsedFor, HasContext, HasProperty, and AtLocation.
For most of these relations, the first argument is the term, and the second argument is the attribute. We use two additional features for PartOf and AtLocation with their arguments swapped, so that the attribute is the first argument. The generic relation RelatedTo, unlike the others, is intended to be symmetric, so we add its value to the value of its swapped version and use it as a single feature.
The Overfitting-Resistant Classifier
The classifier that we use to make a decision based on these features is scikit-learn's LinearSVC, using the default parameters in scikit-learn 0.19.1. (In Section SECREF4 , we discuss other models and parameters that we tried.) This classifier makes effective use of the features while being simple enough to avoid some amount of overfitting.
One aspect of the classifier that made a noticeable difference was the scaling of the features. We tried INLINEFORM0 and INLINEFORM1 -normalizing the columns of the input matrix, representing the values of each feature, and decided on INLINEFORM2 normalization.
We took advantage of the design of our features and the asymmetry of the task as a way to further mitigate overfitting. All of the features were designed to identify a property that the first term has and the second term does not, as is the case for the discriminative examples, so they should all make a non-negative contribution to a feature being discriminative. We can inspect the coefficients of the features in the SVC's decision boundary. If any feature gets a negative weight, it is likely a spurious result from overfitting to the training data. So, after training the classifier, we clip the coefficients of the decision boundary, setting all negative coefficients to zero.
If we were to remove these features and re-train, or require non-negative coefficients as a constraint on the classifier, then other features would inherently become responsible for overfitting. By neutralizing the features after training, we keep the features that are working well as they are, and remove a part of the model that appears to purely represent overfitting. Indeed, clipping the negative coefficients in this way increased our performance on the validation set.
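The sketch below shows the mechanics of this step with scikit-learn; the random feature matrix is a stand-in for the real features, and the column-wise normalization call reflects our reading of the feature-scaling paragraph above rather than the exact preprocessing code.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = np.abs(rng.normal(size=(200, 15)))   # placeholder non-negative features
y_train = rng.integers(0, 2, size=200)         # placeholder binary labels

X_norm = normalize(X_train, axis=0)            # normalize each feature column
clf = LinearSVC()
clf.fit(X_norm, y_train)

# Clip negative coefficients after training: by design, each feature should only
# push toward the "discriminative" decision.
clf.coef_ = np.clip(clf.coef_, 0.0, None)
predictions = clf.predict(X_norm)
```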
Table TABREF8 shows the coefficients assigned to each feature based on the training data.
Other experiments
There are other features that we tried and later discarded. We experimented with a feature similar to the Google Books 2-grams feature, based on the AOL query logs dataset BIBREF10 . It did not add to the performance, most likely because any information it could provide was also provided by Google Books 2-grams. Similarly, we tried extending the Google Books 2-grams data to include the first and third words of a selection of 3-grams, but this, too, appeared redundant with the 2-grams.
We also experimented with a feature based on bounding box annotations available in the OpenImages dataset BIBREF11 . We hoped it would help us capture attributes such as colors, materials, and shapes. While this feature did not improve the classifier's performance on the validation set, it did slightly improve the performance on the test set.
Before deciding on scikit-learn's LinearSVC, we experimented with a number of other classifiers. This included random forests, differentiable models made of multiple ReLU and sigmoid layers, and SVM with an RBF kernel or a polynomial kernel.
We also experimented with different parameters to LinearSVC, such as changing the default value of the penalty parameter $C$ of the error term, changing the penalty from $\ell _2$ to $\ell _1$ , solving the primal optimization problem instead of the dual problem, and changing the loss from squared hinge to hinge. These changes either led to lower performance or had no significant effect, so in the end we used LinearSVC with the default parameters for scikit-learn version 0.19.1.
Results
When trained on the training set, the classifier we describe achieved an $F_1$ score of 0.7617 on the training set, 0.7281 on the validation set, and 0.7368 on the test set. Table TABREF9 shows these scores along with their standard error of the mean, supposing that these data sets were randomly sampled from larger sets.
Ablation Analysis
We performed an ablation analysis to see what the contribution of each of our five sources of features was. We evaluated classifiers that used all non-empty subsets of these sources. Figure FIGREF11 plots the results of these 31 classifiers when evaluated on the validation set and the test set.
It is likely that the classifier with all five sources (ABCDE) performed the best overall. It is in a statistical tie ( INLINEFORM0 ) with ABDE, the classifier that omits Wikipedia as a source.
Most of the classifiers performed better on the test set than on the validation set, as shown by the dotted line. Some simple classifiers with very few features performed particularly well on the test set. One surprisingly high-performing classifier was A (ConceptNet vector similarity), which gets a test $F_1$ score of 0.7355 $\pm $ 0.0091. This is simple enough to be called a heuristic instead of a classifier, and we can express it in closed form. It is equivalent to this expression over ConceptNet Numberbatch embeddings: INLINEFORM2
where INLINEFORM0 .
It is interesting to note that source A (ConceptNet vector similarity) appears to dominate source B (ConceptNet SME) on the test data. SME led to improvements on the validation set, but on the test set, any classifier containing AB performs equal to or worse than the same classifier with B removed. This may indicate that the SME features were the most prone to overfitting, or that the validation set generally required making more difficult distinctions than the test set.
Reproducing These Results
The code for our classifier is available on GitHub at https://github.com/LuminosoInsight/semeval-discriminatt, and its input data is downloadable from https://zenodo.org/record/1183358. | direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset |
6c4cd8da5b4b298f29af3123b58d9a5d4b02180b | 6c4cd8da5b4b298f29af3123b58d9a5d4b02180b_0 | Q: How big is the test set?
Text: Introduction
A requirement of scalable and practical question answering (QA) systems is the ability to reason over multiple documents and combine their information to answer questions. Although existing datasets enabled the development of effective end-to-end neural question answering systems, they tend to focus on reasoning over localized sections of a single document BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . For example, BIBREF4 find that 90% of the questions in the Stanford Question Answering Dataset are answerable given 1 sentence in a document. In this work, we instead focus on multi-evidence QA, in which answering the question requires aggregating evidence from multiple documents BIBREF5 , BIBREF6 .
Most of this work was done while Victor Zhong was at Salesforce Research.
Our multi-evidence QA model, the Coarse-grain Fine-grain Coattention Network (CFC), selects among a set of candidate answers given a set of support documents and a query. The CFC is inspired by coarse-grain reasoning and fine-grain reasoning. In coarse-grain reasoning, the model builds a coarse summary of support documents conditioned on the query without knowing what candidates are available, then scores each candidate. In fine-grain reasoning, the model matches specific fine-grain contexts in which the candidate is mentioned with the query in order to gauge the relevance of the candidate. These two strategies of reasoning are respectively modeled by the coarse-grain and fine-grain modules of the CFC. Each module employs a novel hierarchical attention, a hierarchy of coattention and self-attention, to combine information from the support documents conditioned on the query and candidates. Figure 1 illustrates the architecture of the CFC.
The CFC achieves a new state-of-the-art result on the blind Qangaroo WikiHop test set, beating the previous best accuracy despite not using pretrained contextual encoders. In addition, on the TriviaQA multi-paragraph question answering task BIBREF6 , reranking outputs from a traditional span extraction model BIBREF7 using the CFC improves exact match accuracy by 3.1% and F1 by 3.0%.
Our analysis shows that components in the attention hierarchies of the coarse and fine-grain modules learn to focus on distinct parts of the input. This enables the CFC to more effectively represent a large collection of long documents. Finally, we outline common types of errors produced by the CFC, caused by difficulty in aggregating a large quantity of references, noise in distant supervision, and difficult relation types.
The coarse-grain module and fine-grain module of the CFC correspond to the coarse-grain and fine-grain reasoning strategies. The coarse-grain module summarizes support documents without knowing the candidates: it builds codependent representations of support documents and the query using coattention, then produces a coarse-grain summary using self-attention. In contrast, the fine-grain module retrieves specific contexts in which each candidate occurs: it identifies coreferent mentions of the candidate, then uses coattention to build codependent representations between these mentions and the query. While low-level encodings of the inputs are shared between modules, we show that this division of labour allows the attention hierarchies in each module to focus on different parts of the input. This enables the model to more effectively represent a large number of potentially long support documents.
Suppose we are given a query, a set of $$ support documents, and a set of $$ candidates. Without loss of generality, let us consider the $i$ th document and the $j$ th candidate. Let $\in {\times }$ , $\in {\times }$ , and $\in {\times }$ respectively denote the word embeddings of the query, the $i$ th support document, and the $j$ th candidate answer. Here, $$ , $$0 , and $$1 are the number of words in the corresponding sequence. $$2 is the size of the word embedding. We begin by encoding each sequence using a bidirectional Gated Recurrent Units (GRUs) BIBREF8 .
$$&=& {\tanh (W_q + b_q) } \in {\times } \\ &=& {} \in {\times } \\ &=& {} \in {\times }$$ (Eq. 2)
Here, $$ , $$ , and $$ are the encodings of the query, support, and candidate. $W_q$ and $b_q$ are parameters of a query projection layer. $$ is the size of the bidirectional GRU.
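A minimal PyTorch sketch of this encoding step is shown below; the embedding and hidden sizes are illustrative rather than the paper's hyperparameters, the query projection is represented by a single linear-plus-tanh transform applied before the recurrent layer, and sharing one GRU across the query, support, and candidate encoders is an assumption.

```python
import torch
import torch.nn as nn

emb_dim, hidden = 300, 100
query_proj = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.Tanh())  # query projection layer
encoder = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)

query_emb = torch.randn(1, 12, emb_dim)     # (batch, query length, embedding size)
support_emb = torch.randn(1, 80, emb_dim)   # (batch, document length, embedding size)

query_enc, _ = encoder(query_proj(query_emb))  # (1, 12, 2 * hidden)
support_enc, _ = encoder(support_emb)          # (1, 80, 2 * hidden)
```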
Coarse-grain module
The coarse-grain module of the , shown in Figure 2 , builds codependent representations of support documents $$ and the query $$ using coattention, and then summarizes the coattention context using self-attention to compare it to the candidate $$ . Coattention and similar techniques are crucial to single-document question answering models BIBREF9 , BIBREF10 , BIBREF11 . We start by computing the affinity matrix between the document and the query as
$$= {\left( \right)} ^\intercal \in {\times }$$ (Eq. 5)
The support summary vectors and query summary vectors are defined as
$$&=& \left( \right) \in {\times } \\ &=& \left( ^\intercal \right) \in {\times }$$ (Eq. 6)
where $(X)$ normalizes $X$ column-wise. We obtain the document context as
$$&=& { ~\left( \right) } \in {\times }$$ (Eq. 7)
The coattention context is then the feature-wise concatenation of the document context $$ and the document summary vector $$ .
$$&=& [; ] \in {\times 2}$$ (Eq. 8)
For ease of exposition, we abbreviate coattention, which takes as input a document encoding $$ and a query encoding $$ and produces the coattention context $$ , as
$${}{} \rightarrow $$ (Eq. 9)
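The following PyTorch sketch shows one standard way to compute such a coattention context for a single document and query pair; the tensor names are ours, the exact composition of the two attention directions is a common variant rather than a verbatim transcription of the equations above, and the bidirectional GRU applied on top of the attended document is omitted.

```python
import torch
import torch.nn.functional as F

def coattention(doc_enc, query_enc):
    """Generic coattention for one (document, query) pair.

    doc_enc:   (n_doc, d) encoded support document
    query_enc: (n_q, d)   encoded query
    Returns a (n_doc, 2 * d) codependent representation of the document.
    """
    affinity = doc_enc @ query_enc.T                          # (n_doc, n_q)
    doc_to_query = F.softmax(affinity, dim=1) @ query_enc     # attend to query per document position
    query_to_doc = F.softmax(affinity, dim=0).T @ doc_enc     # attend to document per query position
    second_level = F.softmax(affinity, dim=1) @ query_to_doc  # map query summaries back to document positions
    return torch.cat([doc_to_query, second_level], dim=1)

context = coattention(torch.randn(80, 200), torch.randn(12, 200))
print(context.shape)  # torch.Size([80, 400])
```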
Next, we summarize the coattention context — a codependent encoding of the supporting document and the query — using hierarchical self-attention. First, we use self-attention to create a fixed-length summary vector of the coattention context. We compute a score for each position of the coattention context using a two-layer multi-layer perceptron (MLP). This score is normalized and used to compute a weighted sum over the coattention context.
$$a_{si} &=& \tanh \left( W_2 \tanh \left( W_1 U_{si} + b_1 \right) + b_2 \right) \in {} \\ \hat{a}_{s} &=& (a_{s}) \\ &=& \sum ^{}_i \hat{a}_{si} U_{si} \in {2}$$ (Eq. 10)
Here, $a_{si}$ and $\hat{a}_{si}$ are respectively the unnormalized and normalized score for the $i$ th position of the coattention context. $W_2$ , $b_2$ , $W_1$ , and $b_1$ are parameters for the MLP scorer. $U_{si}$ is the $i$ th position of the coattention context. We abbreviate self-attention, which takes as input a sequence $$ and produces the summary conditioned on the query $\hat{a}_{si}$0 , as
$${} \rightarrow $$ (Eq. 11)
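Below is a small PyTorch sketch of this self-attention summary; the hidden size of the two-layer MLP scorer is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SelfAttentionSummary(nn.Module):
    """Summarize a variable-length sequence into one vector with an MLP scorer."""

    def __init__(self, input_size, hidden_size=128):
        super().__init__()
        # Two-layer scorer: tanh(W2 tanh(W1 x + b1) + b2), one score per position.
        self.scorer = nn.Sequential(
            nn.Linear(input_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, 1), nn.Tanh(),
        )

    def forward(self, sequence):                                # sequence: (length, input_size)
        weights = torch.softmax(self.scorer(sequence), dim=0)   # normalized scores, (length, 1)
        return (weights * sequence).sum(dim=0)                  # weighted sum, (input_size,)

summary = SelfAttentionSummary(400)(torch.randn(80, 400))
print(summary.shape)  # torch.Size([400])
```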
Recall that $$ provides the summary of the $i$ th of $$ support documents. We apply another self-attention layer to compute a fixed-length summary vector of all support documents. This summary is then multiplied with the summary of the candidate answer to produce the coarse-grain score. Let $\in {\times 2}$ represent the sequence of summaries for all support documents. We have
$$G_c &=& {} \in {} \\ G^\prime &=& {} \in {2} \\ &=& \tanh \left( G^\prime + \right) G_c \in {}$$ (Eq. 12)
where $$ and $G_c$ are respectively the encoding and the self-attention summary of the candidate. $G^\prime $ is the fixed-length summary vector of all support documents. $$ and $$ are parameters of a projection layer that reduces the support documents summary from 2 to ${}$ .
Candidate-dependent fine-grain module
In contrast to the coarse-grain module, the fine-grain module, shown in Figure 3 , finds the specific context in which the candidate occurs in the supporting documents using coreference resolution . Each mention is then summarized using a self-attention layer to form a mention representation. We then compute the coattention between the mention representations and the query. This coattention context, which is a codependent encoding of the mentions and the query, is again summarized via self-attention to produce a fine-grain summary to score the candidate.
Let us assume that there are $m$ mentions of the candidate in the $i$ th support document. Let the $k$ th mention correspond to the $$ to $$ tokens in the support document. We represent this mention using self-attention over the span of the support document encoding that corresponds to the mention.
$$_k = {[:]} \in {}$$ (Eq. 16)
Suppose that there are $$ mentions of the candidate in total. We extract each mention representation using self-attention to produce a sequence of mention representations $\in {\times }$ . The coattention context and summary of these mentions $$ with respect to the query $$ are
$$U_m &=& {M}{} \in {\times 2} \\ G_m &=& {U_m} \in {2}$$ (Eq. 17)
We use a linear layer to determine the fine-grain score of the candidate
$$= G_m + \in {}$$ (Eq. 18)
Score aggregation
We take the sum of the coarse-grain score and the fine-grain score, $= + $ , as the score for the candidate. Recall that our earlier presentation is with respect to the $j$ th out of $$ candidates. We combine each candidate score to form the final score vector $Y \in {}$ . The model is trained using cross-entropy loss.
Experiments
We evaluate the CFC on two tasks to assess its effectiveness. The first task is multi-evidence question answering on the unmasked and masked versions of the WikiHop dataset BIBREF5 . The second task is the multi-paragraph extractive question answering task TriviaQA, which we frame as a span reranking task BIBREF6 . On the former, the CFC achieves a new state-of-the-art result. On the latter, reranking the outputs of a span-extraction model BIBREF7 using the CFC results in a significant performance improvement.
Multi-evidence question answering on WikiHop
BIBREF5 proposed the Qangaroo WikiHop task to facilitate the study of multi-evidence question answering. This dataset is constructed by linking entities in a document corpus (Wikipedia) with a knowledge base (Wikidata). This produces a bipartite graph of documents and entities, an edge in which marks the occurrence of an entity in a document. A knowledge base fact triplet consequently corresponds to a path from the subject to the object in the resulting graph. The documents along this path compose the support documents for the fact triplet. The Qangaroo WikiHop task, shown in Figure 4 , is as follows: given a query, that is, the subject and relation of a fact triplet, a set of plausible candidate objects, and the corresponding support documents for the candidates, select the correct candidate as the answer.
The unmasked version of WikiHop represents candidate answers with original text while the masked version replaces them with randomly sampled placeholders in order to remove correlation between frequent answers and support documents. Official blind, held-out test evaluation is performed using the unmasked version. We tokenize the data using Stanford CoreNLP BIBREF12 . We use fixed GloVe embeddings BIBREF13 as well as character ngram embeddings BIBREF14 . We split symbolic query relations into words. All models are trained using ADAM BIBREF15 . We list the detailed experimental setup and hyperparameters of the best-performing model in the Appendix.
We compare the performance of the CFC to other models on the WikiHop leaderboard in Table 1 . The CFC achieves state-of-the-art results on both the masked and unmasked versions of WikiHop. In particular, on the blind, held-out WikiHop test set, the CFC achieves a new best accuracy. The previous result by BIBREF16 uses pretrained contextual encoders, which has led to consistent improvements across NLP tasks BIBREF19 . We outperform this result despite not using pretrained contextual encoders. In addition, we show that the division of labour between the coarse-grain module and the fine-grain module allows the attention hierarchies of each module to focus on different parts of the input. This enables the CFC to more effectively model the large collection of potentially long documents found in WikiHop.
Reranking extractive question answering on TriviaQA
To further study the effectiveness of our model, we also experiment on TriviaQA BIBREF6 , another large-scale question answering dataset that requires aggregating evidence from multiple sentences. Similar to BIBREF20 , BIBREF21 , we decompose the original TriviaQA task into two subtasks: proposing plausible candidate answers and reranking candidate answers.
We address the first subtask using BiDAF++, a competitive span extraction question answering model by BIBREF7 , and the second subtask using the CFC. To compute the candidate list for reranking, we obtain the top 50 answer candidates from BiDAF++. During training, we use the answer candidate that gives the maximum F1 as the gold label for training the CFC.
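A simplified sketch of this labeling step follows; the token-overlap F1 omits the answer normalization used in official TriviaQA evaluation, and the candidate strings are toy values.

```python
from collections import Counter

def f1_score(prediction, gold):
    """Token-overlap F1 between a predicted answer string and a gold answer."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def pick_training_label(candidates, gold_answer):
    """Use the candidate with the highest F1 against the gold answer as the target."""
    return max(range(len(candidates)), key=lambda i: f1_score(candidates[i], gold_answer))

candidates = ["the treaty of versailles", "treaty of versailles", "versailles"]
print(pick_training_label(candidates, "treaty of versailles"))  # 1
```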
Our experimental results in Table 2 show that reranking using the CFC provides consistent performance gains over only using the span extraction question answering model. In particular, reranking using the CFC improves performance regardless of whether the candidate answer set obtained from the span extraction model contains correct answers. On the whole TriviaQA dev set, reranking using the CFC results in a gain of 3.1% EM and 3.0% F1, which suggests that the CFC can be used to further refine the outputs produced by span extraction question answering models.
Ablation study
Table 3 shows the performance contributions of the coarse-grain module and the fine-grain module, as well as model decisions such as self-attention and bidirectional GRUs. Both the coarse-grain module and the fine-grain module significantly contribute to model performance. Replacing self-attention layers with mean-pooling and the bidirectional GRUs with unidirectional GRUs results in less performance degradation. Replacing the encoder with a projection over word embeddings results in a significant performance drop, which suggests that contextual encodings that capture positional information are crucial to this task.
Figure 5 shows the distribution of model prediction errors across various lengths of the dataset for the coarse-grain-only model (-fine) and the fine-grain-only model (-coarse). The fine-grain-only model under-performs the coarse-grain-only model consistently across almost all length measures. This is likely due to the difficulty of coreference resolution of candidates in the support documents — the technique we use of exact lexical matching tends to produce high precision and low recall. However, the fine-grain-only model matches or outperforms the coarse-grain-only model on examples with a large number of support documents or with long support documents. This is likely because the entity-matching coreference resolution we employ captures intra-document and inter-document dependencies more precisely than hierarchical attention.
Qualitative analysis
We examine the hierarchical attention maps produced by the CFC on examples from the WikiHop development set. We find that coattention layers consistently focus on phrases that are similar between the document and the query, while lower level self-attention layers capture phrases that characterize the entity described by the document. Because these attention maps are very large, we do not include them in the main text and instead refer readers to the Appendix.
Coarse-grain summary self-attention, described in ( 12 ), tends to focus on support documents that present information relevant to the object in the query. Figure 6 illustrates an example of this in which the self-attention focuses on documents relevant to the literary work “The Troll”, namely those about The Troll, its author Julia Donaldson, and Old Norse.
In contrast, fine-grain coattention over mention representations tends to focus on the relation part of the query. Figure 7 illustrates an example of this in which the coattention focuses on the relationship between the mentions and the phrase “located in the administrative territorial entity”. Attention maps of more examples can be found in the Appendix.
Error Analysis
We examine errors the CFC produced on the WikiHop development set and categorize them into four types. We list identifiers and examples of these errors in the Appendix. The first type (% of errors) results from the model aggregating the wrong reference. For example, for the query country_of_citizenship jamie burnett, the model correctly attends to the documents about Jamie Burnett being born in South Lanarkshire and about Lanarkshire being in Scotland. However, it wrongly focuses on the word “england” in the latter document instead of the answer “scotland”. We hypothesize that ways to reduce this type of error include using more robust pretrained contextual encoders BIBREF22 , BIBREF19 and coreference resolution. The second type (% of errors) results from questions that are not answerable. For example, the support documents do not provide the narrative location of the play “The Beloved Vagabond” for the query narrative_location the beloved vagabond. The third type (% of errors) results from queries that yield multiple correct answers. An example is the query instance_of qilakitsoq, for which the model predicts “archaeological site”, which is more specific than the answer “town”. The second and third types of errors underscore the difficulty of using distant supervision to create large-scale datasets such as WikiHop. The fourth type (% of errors) results from complex relation types such as parent_taxon which are difficult to interpret using pretrained word embeddings. One method to alleviate this type of error is to embed relations using tunable symbolic embeddings as well as fixed word embeddings.
Conclusion
We presented the CFC, a new model for multi-evidence question answering inspired by coarse-grain and fine-grain reasoning. On the WikiHop question answering task, the CFC achieves state-of-the-art test accuracy, outperforming previous methods. We showed in our analysis that the complementary coarse-grain and fine-grain modules of the CFC focus on different aspects of the input, and are an effective means to represent large collections of long documents.
Acknowledgement
The authors thank Luke Zettlemoyer for his feedback and advice and Sewon Min for her help in preprocessing the TriviaQA dataset. | Unanswerable |
ed4fb6bce855ca932548689e45fde21f26a71035 | ed4fb6bce855ca932548689e45fde21f26a71035_0 | Q: What is coattention?
Text: Introduction
A requirement of scalable and practical question answering (QA) systems is the ability to reason over multiple documents and combine their information to answer questions. Although existing datasets enabled the development of effective end-to-end neural question answering systems, they tend to focus on reasoning over localized sections of a single document BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . For example, BIBREF4 find that 90% of the questions in the Stanford Question Answering Dataset are answerable given 1 sentence in a document. In this work, we instead focus on multi-evidence QA, in which answering the question requires aggregating evidence from multiple documents BIBREF5 , BIBREF6 .
Most of this work was done while Victor Zhong was at Salesforce Research.
Our multi-evidence QA model, the Coarse-grain Fine-grain Coattention Network (CFC), selects among a set of candidate answers given a set of support documents and a query. The CFC is inspired by coarse-grain reasoning and fine-grain reasoning. In coarse-grain reasoning, the model builds a coarse summary of support documents conditioned on the query without knowing what candidates are available, then scores each candidate. In fine-grain reasoning, the model matches specific fine-grain contexts in which the candidate is mentioned with the query in order to gauge the relevance of the candidate. These two strategies of reasoning are respectively modeled by the coarse-grain and fine-grain modules of the CFC. Each module employs a novel hierarchical attention, a hierarchy of coattention and self-attention, to combine information from the support documents conditioned on the query and candidates. Figure 1 illustrates the architecture of the CFC.
The CFC achieves a new state-of-the-art result on the blind Qangaroo WikiHop test set, beating the previous best accuracy despite not using pretrained contextual encoders. In addition, on the TriviaQA multi-paragraph question answering task BIBREF6 , reranking outputs from a traditional span extraction model BIBREF7 using the CFC improves exact match accuracy by 3.1% and F1 by 3.0%.
Our analysis shows that components in the attention hierarchies of the coarse and fine-grain modules learn to focus on distinct parts of the input. This enables the CFC to more effectively represent a large collection of long documents. Finally, we outline common types of errors produced by the CFC, caused by difficulty in aggregating a large quantity of references, noise in distant supervision, and difficult relation types.
The coarse-grain module and fine-grain module of the CFC correspond to the coarse-grain and fine-grain reasoning strategies. The coarse-grain module summarizes support documents without knowing the candidates: it builds codependent representations of support documents and the query using coattention, then produces a coarse-grain summary using self-attention. In contrast, the fine-grain module retrieves specific contexts in which each candidate occurs: it identifies coreferent mentions of the candidate, then uses coattention to build codependent representations between these mentions and the query. While low-level encodings of the inputs are shared between modules, we show that this division of labour allows the attention hierarchies in each module to focus on different parts of the input. This enables the model to more effectively represent a large number of potentially long support documents.
Suppose we are given a query, a set of $$ support documents, and a set of $$ candidates. Without loss of generality, let us consider the $i$ th document and the $j$ th candidate. Let $\in {\times }$ , $\in {\times }$ , and $\in {\times }$ respectively denote the word embeddings of the query, the $i$ th support document, and the $j$ th candidate answer. Here, $$ , $$0 , and $$1 are the number of words in the corresponding sequence. $$2 is the size of the word embedding. We begin by encoding each sequence using a bidirectional Gated Recurrent Units (GRUs) BIBREF8 .
$$&=& {\tanh (W_q + b_q) } \in {\times } \\ &=& {} \in {\times } \\ &=& {} \in {\times }$$ (Eq. 2)
Here, $$ , $$ , and $$ are the encodings of the query, support, and candidate. $W_q$ and $b_q$ are parameters of a query projection layer. $$ is the size of the bidirectional GRU.
Coarse-grain module
The coarse-grain module of the , shown in Figure 2 , builds codependent representations of support documents $$ and the query $$ using coattention, and then summarizes the coattention context using self-attention to compare it to the candidate $$ . Coattention and similar techniques are crucial to single-document question answering models BIBREF9 , BIBREF10 , BIBREF11 . We start by computing the affinity matrix between the document and the query as
$$= {\left( \right)} ^\intercal \in {\times }$$ (Eq. 5)
The support summary vectors and query summary vectors are defined as
$$&=& \left( \right) \in {\times } \\ &=& \left( ^\intercal \right) \in {\times }$$ (Eq. 6)
where $(X)$ normalizes $X$ column-wise. We obtain the document context as
$$&=& { ~\left( \right) } \in {\times }$$ (Eq. 7)
The coattention context is then the feature-wise concatenation of the document context $$ and the document summary vector $$ .
$$&=& [; ] \in {\times 2}$$ (Eq. 8)
For ease of exposition, we abbreviate coattention, which takes as input a document encoding $$ and a query encoding $$ and produces the coattention context $$ , as
$${}{} \rightarrow $$ (Eq. 9)
Next, we summarize the coattention context — a codependent encoding of the supporting document and the query — using hierarchical self-attention. First, we use self-attention to create a fixed-length summary vector of the coattention context. We compute a score for each position of the coattention context using a two-layer multi-layer perceptron (MLP). This score is normalized and used to compute a weighted sum over the coattention context.
$$a_{si} &=& \tanh \left( W_2 \tanh \left( W_1 U_{si} + b_1 \right) + b_2 \right) \in {} \\ \hat{a}_{s} &=& (a_{s}) \\ &=& \sum ^{}_i \hat{a}_{si} U_{si} \in {2}$$ (Eq. 10)
Here, $a_{si}$ and $\hat{a}_{si}$ are respectively the unnormalized and normalized score for the $i$ th position of the coattention context. $W_2$ , $b_2$ , $W_1$ , and $b_1$ are parameters for the MLP scorer. $U_{si}$ is the $i$ th position of the coattention context. We abbreviate self-attention, which takes as input a sequence $$ and produces the summary conditioned on the query $\hat{a}_{si}$0 , as
$${} \rightarrow $$ (Eq. 11)
Recall that $$ provides the summary of the $i$ th of $$ support documents. We apply another self-attention layer to compute a fixed-length summary vector of all support documents. This summary is then multiplied with the summary of the candidate answer to produce the coarse-grain score. Let $\in {\times 2}$ represent the sequence of summaries for all support documents. We have
$$G_c &=& {} \in {} \\ G^\prime &=& {} \in {2} \\ &=& \tanh \left( G^\prime + \right) G_c \in {}$$ (Eq. 12)
where $$ and $G_c$ are respectively the encoding and the self-attention summary of the candidate. $G^\prime $ is the fixed-length summary vector of all support documents. $$ and $$ are parameters of a projection layer that reduces the support documents summary from 2 to ${}$ .
Candidate-dependent fine-grain module
In contrast to the coarse-grain module, the fine-grain module, shown in Figure 3 , finds the specific context in which the candidate occurs in the supporting documents using coreference resolution . Each mention is then summarized using a self-attention layer to form a mention representation. We then compute the coattention between the mention representations and the query. This coattention context, which is a codependent encoding of the mentions and the query, is again summarized via self-attention to produce a fine-grain summary to score the candidate.
Let us assume that there are $m$ mentions of the candidate in the $i$ th support document. Let the $k$ th mention correspond to the $$ to $$ tokens in the support document. We represent this mention using self-attention over the span of the support document encoding that corresponds to the mention.
$$_k = {[:]} \in {}$$ (Eq. 16)
Suppose that there are $$ mentions of the candidate in total. We extract each mention representation using self-attention to produce a sequence of mention representations $\in {\times }$ . The coattention context and summary of these mentions $$ with respect to the query $$ are
$$U_m &=& {M}{} \in {\times 2} \\ G_m &=& {U_m} \in {2}$$ (Eq. 17)
We use a linear layer to determine the fine-grain score of the candidate
$$= G_m + \in {}$$ (Eq. 18)
Score aggregation
We take the sum of the coarse-grain score and the fine-grain score, $= + $ , as the score for the candidate. Recall that our earlier presentation is with respect to the $j$ th out of $$ candidates. We combine each candidate score to form the final score vector $Y \in {}$ . The model is trained using cross-entropy loss.
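A minimal sketch of this aggregation and training objective is shown below; the number of candidates and the scores themselves are placeholders.

```python
import torch
import torch.nn.functional as F

n_candidates = 5
coarse_scores = torch.randn(n_candidates)   # one coarse-grain score per candidate
fine_scores = torch.randn(n_candidates)     # one fine-grain score per candidate

scores = coarse_scores + fine_scores                       # aggregated candidate scores
answer_index = torch.tensor([2])                           # index of the correct candidate
loss = F.cross_entropy(scores.unsqueeze(0), answer_index)  # cross-entropy training loss
prediction = scores.argmax().item()                        # predicted candidate at test time
```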
Experiments
We evaluate the CFC on two tasks to assess its effectiveness. The first task is multi-evidence question answering on the unmasked and masked versions of the WikiHop dataset BIBREF5 . The second task is the multi-paragraph extractive question answering task TriviaQA, which we frame as a span reranking task BIBREF6 . On the former, the CFC achieves a new state-of-the-art result. On the latter, reranking the outputs of a span-extraction model BIBREF7 using the CFC results in a significant performance improvement.
Multi-evidence question answering on WikiHop
BIBREF5 proposed the Qangaroo WikiHop task to facilitate the study of multi-evidence question answering. This dataset is constructed by linking entities in a document corpus (Wikipedia) with a knowledge base (Wikidata). This produces a bipartite graph of documents and entities, an edge in which marks the occurrence of an entity in a document. A knowledge base fact triplet consequently corresponds to a path from the subject to the object in the resulting graph. The documents along this path compose the support documents for the fact triplet. The Qangaroo WikiHop task, shown in Figure 4 , is as follows: given a query, that is, the subject and relation of a fact triplet, a set of plausible candidate objects, and the corresponding support documents for the candidates, select the correct candidate as the answer.
The unmasked version of WikiHop represents candidate answers with original text while the masked version replaces them with randomly sampled placeholders in order to remove correlation between frequent answers and support documents. Official blind, held-out test evaluation is performed using the unmasked version. We tokenize the data using Stanford CoreNLP BIBREF12. We use fixed GloVe embeddings BIBREF13 as well as character ngram embeddings BIBREF14. We split symbolic query relations into words. All models are trained using ADAM BIBREF15. We list the detailed experiment setup and hyperparameters of the best-performing model in the Appendix.
We compare the performance of the CFC to other models on the WikiHop leaderboard in Table 1. The CFC achieves state-of-the-art results on both the masked and unmasked versions of WikiHop. In particular, on the blind, held-out WikiHop test set, the CFC achieves a new best accuracy. The previous best result by BIBREF16 uses pretrained contextual encoders, which have led to consistent improvements across NLP tasks BIBREF19. We outperform this result despite not using pretrained contextual encoders. In addition, we show that the division of labour between the coarse-grain module and the fine-grain module allows the attention hierarchies of each module to focus on different parts of the input. This enables the CFC to more effectively model the large collection of potentially long documents found in WikiHop.
Reranking extractive question answering on TriviaQA
To further study the effectiveness of our model, we also experiment on TriviaQA BIBREF6 , another large-scale question answering dataset that requires aggregating evidence from multiple sentences. Similar to BIBREF20 , BIBREF21 , we decompose the original TriviaQA task into two subtasks: proposing plausible candidate answers and reranking candidate answers.
We address the first subtask using BiDAF++, a competitive span extraction question answering model by BIBREF7, and the second subtask using the CFC. To compute the candidate list for reranking, we obtain the top 50 answer candidates from BiDAF++. During training, we use the answer candidate that gives the maximum F1 as the gold label for training the CFC.
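A small sketch of how such a reranking training label could be chosen. The token-level F1 here is the standard SQuAD-style metric; the function names are ours and not from the paper.

```python
def f1_score(pred, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = sum(min(pred_toks.count(t), gold_toks.count(t)) for t in set(pred_toks))
    if common == 0:
        return 0.0
    precision = common / len(pred_toks)
    recall = common / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def pick_training_label(candidates, gold_answer):
    """Choose the top-50 BiDAF++ candidate with maximum F1 as the reranker's gold label."""
    return max(candidates, key=lambda c: f1_score(c, gold_answer))
```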
Our experimental results in Table 2 show that reranking using the CFC provides consistent performance gains over only using the span extraction question answering model. In particular, reranking using the CFC improves performance regardless of whether the candidate answer set obtained from the span extraction model contains correct answers. On the whole TriviaQA dev set, reranking using the CFC results in a gain of 3.1% EM and 3.0% F1, which suggests that the CFC can be used to further refine the outputs produced by span extraction question answering models.
Ablation study
Table 3 shows the performance contributions of the coarse-grain module, the fine-grain module, as well as model decisions such as self-attention and bidirectional GRUs. Both the coarse-grain module and the fine-grain module contribute significantly to model performance. Replacing self-attention layers with mean-pooling and the bidirectional GRUs with unidirectional GRUs results in less performance degradation. Replacing the encoder with a projection over word embeddings results in a significant performance drop, which suggests that contextual encodings that capture positional information are crucial to this task.
Figure 5 shows the distribution of model prediction errors across various length measures of the dataset for the coarse-grain-only model (-fine) and the fine-grain-only model (-coarse). The fine-grain-only model underperforms the coarse-grain-only model consistently across almost all length measures. This is likely due to the difficulty of coreference resolution of candidates in the support documents: the exact lexical matching technique we use tends to produce high precision and low recall. However, the fine-grain-only model matches or outperforms the coarse-grain-only model on examples with a large number of support documents or with long support documents. This is likely because the entity-matching coreference resolution we employ captures intra-document and inter-document dependencies more precisely than hierarchical attention.
Qualitative analysis
We examine the hierarchical attention maps produced by the CFC on examples from the WikiHop development set. We find that coattention layers consistently focus on phrases that are similar between the document and the query, while lower-level self-attention layers capture phrases that characterize the entity described by the document. Because these attention maps are very large, we do not include them in the main text and instead refer readers to the Appendix.
Coarse-grain summary self-attention, described in (Eq. 12), tends to focus on support documents that present information relevant to the object in the query. Figure 6 illustrates an example of this in which the self-attention focuses on documents relevant to the literary work “The Troll”, namely those about The Troll, its author Julia Donaldson, and Old Norse.
In contrast, fine-grain coattention over mention representations, described in (Eq. 17), tends to focus on the relation part of the query. Figure 7 illustrates an example of this in which the coattention focuses on the relationship between the mentions and the phrase “located in the administrative territorial entity”. Attention maps of more examples can be found in the Appendix.
Error Analysis
We examine the errors the CFC produced on the WikiHop development set and categorize them into four types. We list identifiers and examples of these errors in the Appendix. The first type of error results from the model aggregating the wrong reference. For example, for the query country_of_citizenship jamie burnett, the model correctly attends to the documents about Jamie Burnett being born in South Lanarkshire and about Lanarkshire being in Scotland. However, it wrongly focuses on the word “england” in the latter document instead of the answer “scotland”. We hypothesize that ways to reduce this type of error include using more robust pretrained contextual encoders BIBREF22 , BIBREF19 and coreference resolution. The second type of error results from questions that are not answerable. For example, the support documents do not provide the narrative location of the play “The Beloved Vagabond” for the query narrative_location the beloved vagabond. The third type of error results from queries that yield multiple correct answers. An example is the query instance_of qilakitsoq, for which the model predicts “archaeological site”, which is more specific than the answer “town”. The second and third types of errors underscore the difficulty of using distant supervision to create large-scale datasets such as WikiHop. The fourth type of error results from complex relation types such as parent_taxon, which are difficult to interpret using pretrained word embeddings. One method to alleviate this type of error is to embed relations using tunable symbolic embeddings as well as fixed word embeddings.
Conclusion
We presented the CFC, a new model for multi-evidence question answering. On the WikiHop question answering task, the CFC achieves a new state-of-the-art test accuracy, outperforming previous methods. We showed in our analysis that the complementary coarse-grain and fine-grain modules of the CFC focus on different aspects of the input, and are an effective means to represent large collections of long documents.
Acknowledgement
The authors thank Luke Zettlemoyer for his feedback and advice and Sewon Min for her help in preprocessing the TriviaQA dataset. | Unanswerable |
4cc5ba404d6a47363f119d9db7266157d3bb246b | 4cc5ba404d6a47363f119d9db7266157d3bb246b_0 | Q: What off-the-shelf QA model was used to answer sub-questions?
Text: Introduction
Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?”
Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision.
Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach.
We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark.
Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA.
Method
We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” $a_1, \dots , a_N$ to each sub-question may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [s_N, a_N])$.
Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$.
Method ::: Unsupervised Question Decomposition
To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6).
Method ::: Unsupervised Question Decomposition ::: Creating Pseudo-Decompositions
For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple questions $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse:
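The displayed objective (cited later in the text as Eq. DISPLAY_FORM5) is missing from this copy. A reconstruction consistent with the description above and with the $N=2$ special case given later is the following; it should be read as an assumption rather than the paper's exact formula.

$$d^* = \operatornamewithlimits{argmax}_{d^{\prime } \subset S,\ |d^{\prime }| = N} \left[ \sum _{s_i \in d^{\prime }} f(q, s_i) - \sum _{s_i, s_j \in d^{\prime },\ i < j} f(s_i, s_j) \right]$$ (Eq. DISPLAY_FORM5)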
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose
Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section):
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: No Learning
We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Sequence-to-Sequence (Seq2Seq)
We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Unsupervised Sequence-to-Sequence (USeq2Seq)
We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing.
Method ::: Answering Sub-Questions
To answer the generated sub-questions, we use an off-the-shelf QA model. The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$.
Method ::: QA using Decompositions
Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above.
Experimental Setup ::: Question Answering Task
We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans.
Experimental Setup ::: Unsupervised Decomposition ::: Question Data
We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts.
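As an illustration of the mining step, a minimal fastText classification sketch follows; the training file name, labels, example questions, and hyperparameters are placeholders, not the paper's exact setup.

```python
import fasttext

# Each line of questions.train pairs a label with a question, e.g.
# "__label__hotpotqa Which author of The Troll also wrote other picture books?"
# "__label__squad When was the treaty signed?"
# "__label__crawl How do I reset my password?"
model = fasttext.train_supervised("questions.train", epoch=10, wordNgrams=2)

labels, probs = model.predict("what profession does h. l. mencken have ?")
if labels[0] == "__label__squad":
    pass  # add the mined question to the single-hop corpus S
elif labels[0] == "__label__hotpotqa":
    pass  # add the mined question to the multi-hop corpus Q
```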
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions
To create pseudo-decompositions, we set the number $N$ of sub-questions per question to 2, as questions in HotpotQA usually involve two reasoning hops. In Appendix §SECREF52, we discuss how our method works when $N$ varies per question.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Similarity-based Retrieval
To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to:
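The reduced equation (cited in the next paragraph as Eq. DISPLAY_FORM19) is also absent here. Given that $f$ is cosine similarity over unit vectors $\hat{\mathbf {v}}$, a reconstruction consistent with the surrounding text, offered as an assumption, is:

$$(s_1^*, s_2^*) = \operatornamewithlimits{argmax}_{\lbrace s_1, s_2\rbrace \subset S} \left[ \hat{\mathbf {v}}_{q}^{\top } \hat{\mathbf {v}}_{s_1} + \hat{\mathbf {v}}_{q}^{\top } \hat{\mathbf {v}}_{s_2} - \hat{\mathbf {v}}_{s_1}^{\top } \hat{\mathbf {v}}_{s_2} \right]$$ (Eq. DISPLAY_FORM19)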
The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$.
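A minimal sketch of how the candidate set $S^{\prime }$ could be built with FAISS. The function and variable names are ours, and an exact flat inner-product index is used for simplicity; at the 10M-question scale an approximate index may be preferable.

```python
import faiss
import numpy as np

def topk_subquestion_candidates(q_vec, s_vecs, k=1000):
    # q_vec: (dim,) embedding of the multi-hop question
    # s_vecs: (num_simple_questions, dim) embeddings of simple questions
    s = np.ascontiguousarray(s_vecs, dtype=np.float32)
    faiss.normalize_L2(s)                      # unit-normalize rows in place
    index = faiss.IndexFlatIP(s.shape[1])      # inner product == cosine on unit vectors
    index.add(s)
    q = np.ascontiguousarray(q_vec, dtype=np.float32).reshape(1, -1)
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)           # top-k most similar simple questions
    return ids[0], scores[0]
```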
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Random Retrieval
For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Editing Pseudo-Decompositions
Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval.
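A rough sketch of this entity-replacement post-processing step with spaCy; the model name and the string-matching details are assumptions for illustration.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # any spaCy pipeline with an NER component

def replace_foreign_entities(sub_question, multihop_question):
    """Swap entities in a retrieved sub-question that never appear in the multi-hop
    question with a random same-typed entity from the multi-hop question, if one exists."""
    q_ents = {}
    for ent in nlp(multihop_question).ents:
        q_ents.setdefault(ent.label_, []).append(ent.text)
    out = sub_question
    for ent in nlp(sub_question).ents:
        if ent.text not in multihop_question and q_ents.get(ent.label_):
            out = out.replace(ent.text, random.choice(q_ents[ent.label_]))
    return out
```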
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Pre-training
Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Seq2Seq
We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: USeq2Seq
We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$).
Experimental Setup ::: Single-hop Question Answering Model
We train our single-hop QA model following prior work from BIBREF3 on HotpotQA.
Experimental Setup ::: Single-hop Question Answering Model ::: Model Architecture
We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows:
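The displayed formula is missing at this point. A form consistent with the description, namely a single softmax over all spans of all paragraphs after subtracting each paragraph's “no answer” logit, would be the following reconstruction (an assumption, not quoted from the paper):

$$p(s_p) = \frac{\exp \left( l(s_p) - n(p) \right)}{\sum _{p^{\prime } = 1}^{P} \sum _{s_{p^{\prime }}} \exp \left( l(s_{p^{\prime }}) - n(p^{\prime }) \right)}$$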
We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3.
Experimental Setup ::: Single-hop Question Answering Model ::: Training Data and Ensembling
Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions.
Experimental Setup ::: Single-hop Question Answering Model ::: Returned Text
We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions.
Experimental Setup ::: Multi-hop Question Answering Model
Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions.
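To make the input format concrete, here is a small sketch of how the multi-hop question and its (sub-question, sub-answer) pairs might be concatenated. The separator token and the example sub-answers are illustrative assumptions, not the paper's exact format.

```python
def build_multihop_input(question, subqas, sep="</s>"):
    """Concatenate the multi-hop question with each (sub-question, sub-answer) pair in order."""
    parts = [question]
    for sub_q, sub_a in subqas:
        parts.extend([sub_q, sub_a])
    return f" {sep} ".join(parts)

# Example usage (sub-answers here are only illustrative):
# build_multihop_input(
#     "What profession do H. L. Mencken and Albert Camus have in common?",
#     [("What profession does H. L. Mencken have?", "journalist and essayist"),
#      ("Who was Albert Camus?", "a French philosopher, author, and journalist")])
```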
Results on Question Answering
We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions.
We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version.
Results on Question Answering ::: Main Results
Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set).
More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set).
The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing.
After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question.
Results on Question Answering ::: Question Type Breakdown
To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. “Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types.
Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions).
Results on Question Answering ::: Answers to Sub-Questions are Crucial
To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1).
Results on Question Answering ::: How Do Decompositions Help?
Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22.
Results on Question Answering ::: Example Decompositions
To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA.
Analysis ::: Unsupervised Decomposition Model ::: Intrinsic Evaluation of Decompositions
We evaluate the quality of decompositions on other metrics aside from downstream QA. To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenshtein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table .
Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive.
Analysis ::: Unsupervised Decomposition Model ::: Quality of Decomposition Model
Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition.
Analysis ::: Single-hop Question Answering Model ::: Sub-Answer Confidence
Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question.
Analysis ::: Single-hop Question Answering Model ::: Changing the Single-hop QA Model
We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble).
Analysis ::: Multi-hop Question Answering Model ::: Varying the Base Model
To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model.
Related Work
Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering.
More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach.
Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44.
Conclusion
We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems.
Acknowledgements
EP is supported by the NSF Graduate Research Fellowship. KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC also thanks eBay and NVIDIA for their support. We thank Paul Christiano, Sebastian Riedel, He He, Jonathan Berant, Alexis Conneau, Jiatao Gu, Sewon Min, Yixin Nie, Lajanugen Logeswaran, and Adam Fisch for helpful feedback, as well as Yichen Jiang and Peng Qi for help with evaluation.
Pseudo-Decompositions
The tables in this appendix show examples of pseudo-decompositions and learned decompositions from various models.
Pseudo-Decompositions ::: Variable Length Pseudo-Decompositions
In §SECREF15, we leveraged domain knowledge about the task to fix the pseudo-decomposition length $N=2$. A general algorithm for creating pseudo-decompositions should find a suitable $N$ for each question. We find that Eq. DISPLAY_FORM5 in SECREF4 always results in decompositions of length $N=2$, as the regularization term grows quickly with $N$. Thus, we test another formulation based on Euclidean distance:
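The displayed formulation (cited below as Eq. DISPLAY_FORM53) is missing here. Based on the description that it sums sub-question representations and compares them to the question vector, a plausible reconstruction, to be treated as an assumption, is:

$$d^* = \operatornamewithlimits{argmin}_{d^{\prime } \subset S} \left\Vert \mathbf {v}_q - \sum _{s \in d^{\prime }} \mathbf {v}_s \right\Vert _2$$ (Eq. DISPLAY_FORM53)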
We create pseudo-decompositions in a similar way as before, first finding a set of candidate sub-questions $S^{\prime } \subset S$ with high cosine similarity to $\mathbf {v}_q$, then performing beam search up to a maximum value of $N$. We test pseudo-decomposition formulations by creating synthetic compositional questions by combining 2-3 single-hop questions with “and.” We then measure the ranking of the correct decomposition (a concatenation of the single-hop questions). For $N=2$, both methods perform well, but Eq. DISPLAY_FORM5 does not work for decompositions where $N=3$, whereas Eq. DISPLAY_FORM53 does, achieving a mean reciprocal rank of 30%. However, Eq. DISPLAY_FORM5 outperforms Eq. DISPLAY_FORM53 on HotpotQA, e.g., achieving 79.9 vs. 79.4 F1 when using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 to answer sub-questions. Eq. DISPLAY_FORM5 is also faster to compute and easier to scale. Moreover, Eq. DISPLAY_FORM53 requires an embedding space where summing sub-question representations is meaningful, whereas Eq. DISPLAY_FORM5 only requires embeddings that encode semantic similarity. Thus, we adopt Eq. DISPLAY_FORM5 for our main experiments. Table contains an example where the variable-length decomposition method mentioned above produces a three-subquestion decomposition whereas the other methods are fixed to two subquestions.
Pseudo-Decompositions ::: Impact of Question Corpus Size
In addition to our previous results on FastText vs. Random pseudo-decompositions, we found it important to use a large question corpus to create pseudo-decompositions. QA F1 increased from 79.2 to 80.1 when we trained decomposition models on pseudo-decompositions comprised of questions retrieved from Common Crawl ($>$10M questions) rather than only SQuAD 2 ($\sim $130K questions), using an appropriately larger beam size (100 $\rightarrow $ 1000).
Pseudo-Decompositions ::: Pseudo-Decomposition Retrieval Method
Table shows QA results with pseudo-decompositions retrieved using sum-of-bag-of-words representations from FastText, TFIDF, and $\textsc {BERT}_{\textsc {LARGE}}$ first-layer hidden states. We also vary the learning method and include results for Curriculum Seq2Seq (CSeq2Seq), where we initialize the USeq2Seq approach with the Seq2Seq model trained on the same data.
Unsupervised Decomposition Model ::: Unsupervised Stopping Criterion
To stop USeq2Seq training, we use an unsupervised stopping criterion to avoid relying on a supervised validation set of decompositions. We generate a decomposition $\hat{d}$ for a multi-hop question $q$, and we measure BLEU between $q$ and the model-generated question $\hat{q}$ for $\hat{d}$, similar to round-trip BLEU in unsupervised translation BIBREF17. We scale round-trip BLEU score by the fraction of “good” decompositions, where a good decomposition has (1) 2 sub-questions (question marks), (2) no sub-question which contains all words in the multi-hop question, and (3) no sub-question longer than the multi-hop question. Without scaling, decomposition models achieve perfect round-trip BLEU by copying the multi-hop question as the decomposition. We measure scaled BLEU across multi-hop questions in HotpotQA dev, and we stop training when the metric does not increase for 3 consecutive epochs.
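A sketch of how this scaled stopping metric could be computed. The three “good decomposition” checks follow the conditions above, while the tokenization and the BLEU implementation (sacrebleu here) are assumptions.

```python
import sacrebleu

def is_good_decomposition(question, decomposition):
    subqs = [s.strip() + "?" for s in decomposition.split("?") if s.strip()]
    if len(subqs) != 2:                                  # (1) exactly two sub-questions
        return False
    q_words = set(question.lower().split())
    for s in subqs:
        if q_words <= set(s.lower().split()):            # (2) contains all question words
            return False
        if len(s.split()) > len(question.split()):       # (3) longer than the question
            return False
    return True

def scaled_round_trip_bleu(questions, decompositions, reconstructions):
    """Round-trip BLEU(q, q_hat) scaled by the fraction of 'good' decompositions."""
    bleu = sacrebleu.corpus_bleu(reconstructions, [questions]).score
    good = sum(is_good_decomposition(q, d) for q, d in zip(questions, decompositions))
    return bleu * good / max(len(questions), 1)
```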
It is possible to stop training the decomposition model based on downstream QA accuracy. However, training a QA model on each decomposition model checkpoint (1) is computationally expensive and (2) ties decompositions to a specific, downstream QA model. In Figure FIGREF57, we show downstream QA results across various USeq2Seq checkpoints when using the $\textsc {BERT}_{\textsc {BASE}}$ single-hop QA ensemble from BIBREF3. The unsupervised stopping criterion does not significantly hurt downstream QA compared to using a weakly-supervised stopping criterion.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: MLM Pre-training
We pre-train our encoder-decoder distributed across 8 DGX-1 machines, each with 8, 32GB NVIDIA V100 GPUs interconnected by Infiniband. We pre-train using the largest possible batch size (1536), and we choose the best learning rate ($3 \times 10^{-5}$) based on training loss after a small number of iterations. We chose a maximum sequence length of 128. We keep other hyperparameters identical to those from BIBREF6 used in unsupervised translation.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: USeq2Seq
We train each decomposition model with distributed training across 8, 32GB NVIDIA V100 GPUs. We chose the largest possible batch size (256) and then the largest learning rate which resulted in stable training ($3 \times 10^{-5}$). Other hyperparameters are the same as BIBREF6.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: Seq2Seq
We use a large batch size (1024) and chose the largest learning rate which resulted in stable training across many pseudo-decomposition training corpora ($1 \times 10^{-4}$). We keep other training settings and hyperparameters the same as for USeq2Seq.
Multi-hop QA Model ::: Varying the Number of Training Examples
To understand how decompositions impact performance given different amounts of training data, we vary the number of multi-hop training examples. We use the “medium” and “hard” level labels in HotpotQA to determine which examples are multi-hop. We consider training setups where the multi-hop QA model does or does not use data augmentation via training on hotpot “easy”/single-hop questions and SQuAD 2 questions. Fig. FIGREF63 shows the results. Decompositions improve QA, so long as the multi-hop QA model has enough training data, either via single-hop QA examples or enough multi-hop QA examples.
Multi-hop QA Model ::: Training Hyperparameters
To train $\textsc {RoBERTa}_{\textsc {LARGE}}$ , we fix the number of training epochs to 2, as training longer did not help. We sweep over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $, similar to the ranges used in the original paper BIBREF24. We chose the hyperparameters that did best for the baseline QA model (without decompositions) on our validation set: batch size 64, learning rate $1.5 \times 10^{-5}$, and weight decay $0.01$. Similarly, for the experiments with BERT, we fix the number of epochs to 2 and choose hyperparameters by sweeping over the recommended ranges from BIBREF26 for learning rate ($\lbrace 2 \times 10^{-5}, 3 \times 10^{-5}, 5 \times 10^{-5}\rbrace $) and batch size ($\lbrace 16, 32\rbrace $). For $\textsc {BERT}_{\textsc {BASE}}$ , we thus choose learning rate $2 \times 10^{-5}$ and batch size 16, and for $\textsc {BERT}_{\textsc {LARGE}}$ , we use the whole-word masking model with learning rate $2 \times 10^{-5}$ and batch size 32. We train all QA models with mixed precision floating point arithmetic BIBREF45, distributing training across 8, 32GB NVIDIA V100 GPUs.
Multi-hop QA Model ::: Improvements across Detailed Question Types
To better understand where decompositions improve QA, we show the improvement across various fine-grained splits of the evaluation sets in Figures FIGREF66-FIGREF70. | $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 |
1d72770d075b22411ec86d8bdee532f8c643740b | 1d72770d075b22411ec86d8bdee532f8c643740b_0 | Q: How large is the improvement over the baseline?
Text: Introduction
Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?”
Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision.
Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach.
We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark.
Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA.
Method
We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” $a_1, \dots , a_N$ to each sub-question may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [s_N, a_N])$.
Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$.
Method ::: Unsupervised Question Decomposition
To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6).
Method ::: Unsupervised Question Decomposition ::: Creating Pseudo-Decompositions
For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple questions $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse:
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose
Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section):
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: No Learning
We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Sequence-to-Sequence (Seq2Seq)
We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Unsupervised Sequence-to-Sequence (USeq2Seq)
We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing.
Method ::: Answering Sub-Questions
To answer the generated sub-questions, we use an off-the-shelf QA model. The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$.
Method ::: QA using Decompositions
Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above.
Experimental Setup ::: Question Answering Task
We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans.
Experimental Setup ::: Unsupervised Decomposition ::: Question Data
We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts.
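A minimal sketch of this mining step is shown below; the training-file name, the label names, and the exact “wh”-word list are illustrative assumptions rather than the exact setup.

```python
# Sketch: filter Common Crawl sentences that look like questions, then route
# them to S or Q with a FastText classifier trained on labeled examples.
import fasttext

WH_WORDS = ("what", "who", "whose", "whom", "where", "when", "which", "why", "how")

def looks_like_question(sentence: str) -> bool:
    s = sentence.strip()
    return s.endswith("?") and s.lower().startswith(WH_WORDS)

# "questions.train" is assumed to contain lines such as
# "__label__hotpot Which is taller, Momhil Sar or K2?" for the 60K sampled questions.
classifier = fasttext.train_supervised(input="questions.train")

def route(question: str) -> str:
    labels, _ = classifier.predict(question.replace("\n", " "))
    return labels[0]  # "__label__squad" -> add to S; "__label__hotpot" -> add to Q
```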
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions
To create pseudo-decompositions, we set the number $N$ of sub-questions per question to 2, as questions in HotpotQA usually involve two reasoning hops. In Appendix §SECREF52, we discuss how our method works when $N$ varies per question.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Similarity-based Retrieval
To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to: $(s_1^*, s_2^*) = \operatornamewithlimits{argmax}_{s_1, s_2 \in S} \left[ \hat{\mathbf {v}}_{q}^{\top } \hat{\mathbf {v}}_{s_1} + \hat{\mathbf {v}}_{q}^{\top } \hat{\mathbf {v}}_{s_2} - \hat{\mathbf {v}}_{s_1}^{\top } \hat{\mathbf {v}}_{s_2} \right]$ (Eq. DISPLAY_FORM19).
The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$.
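This retrieval step can be sketched as follows, assuming unit-normalized FastText sum embeddings stored as float32 NumPy arrays; in practice the FAISS index is built once for the whole corpus rather than per question, and the function name is illustrative.

```python
# Sketch: approximate Eq. DISPLAY_FORM19 by restricting the search to the
# top-K sub-questions most similar to q, retrieved with FAISS.
import numpy as np
import faiss

def retrieve_pseudo_decomposition(q_vec, sub_vecs, index, k=1000):
    """q_vec: (dim,) unit vector for q; sub_vecs: (|S|, dim) unit vectors;
    index: a faiss.IndexFlatIP built over sub_vecs (inner product = cosine)."""
    _, ids = index.search(q_vec[None, :], k)         # build S': top-K candidates
    cand = sub_vecs[ids[0]]                          # (K, dim)
    rel = cand @ q_vec                               # similarity of each candidate to q
    scores = rel[:, None] + rel[None, :] - cand @ cand.T
    np.fill_diagonal(scores, -np.inf)                # require two distinct sub-questions
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return int(ids[0][i]), int(ids[0][j])

# Building the index once:
# index = faiss.IndexFlatIP(sub_vecs.shape[1]); index.add(sub_vecs)
```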
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Random Retrieval
For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Editing Pseudo-Decompositions
Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval.
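A sketch of this post-processing step appears below, using a generic spaCy NER model; the specific model name and function name are assumptions.

```python
# Sketch: replace entities in a retrieved sub-question that do not appear in q
# with a random same-typed entity from q, when one exists.
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def replace_entities(sub_question: str, multi_hop_question: str) -> str:
    q_ents_by_type, q_texts = {}, set()
    for ent in nlp(multi_hop_question).ents:
        q_ents_by_type.setdefault(ent.label_, []).append(ent.text)
        q_texts.add(ent.text)
    edited = sub_question
    for ent in nlp(sub_question).ents:
        # Replace only entities absent from q, and only if q has an entity of the same type.
        if ent.text not in q_texts and q_ents_by_type.get(ent.label_):
            edited = edited.replace(ent.text, random.choice(q_ents_by_type[ent.label_]), 1)
    return edited
```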
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Pre-training
Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Seq2Seq
We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: USeq2Seq
We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$).
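The noise function for the denoising objective can be sketched as below; the masking, dropping, and shuffling rates are illustrative assumptions rather than the exact values used.

```python
# Sketch: produce a noisy copy of a token sequence by masking, dropping, and
# locally shuffling tokens, as in unsupervised machine translation.
import random

def add_noise(tokens, mask_token="<mask>", p_mask=0.1, p_drop=0.1, shuffle_window=3):
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p_drop:
            continue                      # drop the token
        elif r < p_drop + p_mask:
            noisy.append(mask_token)      # mask the token
        else:
            noisy.append(tok)
    # Local shuffle: each surviving token moves at most shuffle_window positions.
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy))]
```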
Experimental Setup ::: Single-hop Question Answering Model
We train our single-hop QA model following prior work from BIBREF3 on HotpotQA.
Experimental Setup ::: Single-hop Question Answering Model ::: Model Architecture
We fine-tune a pre-trained model to take a question and several paragraphs and predict the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows: $p(s_p) = \frac{\exp \left( l(s_p) - n(p) \right)}{\sum _{p^{\prime }=1}^{P} \sum _{s_{p^{\prime }}} \exp \left( l(s_{p^{\prime }}) - n(p^{\prime }) \right)}$.
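A minimal sketch of this inference-time scoring, assuming the per-paragraph span logits and “no answer” logits have already been computed:

```python
# Sketch: subtract each paragraph's "no answer" logit from its span logits,
# then softmax over all spans from all paragraphs as one chunk.
import torch

def span_probabilities(span_logits, no_answer_logits):
    """span_logits: list of P 1-D tensors, one per paragraph.
    no_answer_logits: tensor of shape (P,)."""
    adjusted = [l - n for l, n in zip(span_logits, no_answer_logits)]
    flat = torch.cat(adjusted)            # all paragraphs as a single chunk
    return torch.softmax(flat, dim=0)     # one probability per candidate span
```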
We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3.
Experimental Setup ::: Single-hop Question Answering Model ::: Training Data and Ensembling
Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions.
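Two small pieces of this setup, logit averaging across the ensemble and the 5% question word dropout, can be sketched as follows; the function names are illustrative.

```python
# Sketch: average the two ensemble members' logits, and randomly drop question
# words during training to make the model robust to ill-formed sub-questions.
import random
import torch

def ensemble_logits(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    return (logits_a + logits_b) / 2.0

def drop_question_words(question: str, p: float = 0.05) -> str:
    kept = [w for w in question.split() if random.random() >= p]
    return " ".join(kept) if kept else question
```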
Experimental Setup ::: Single-hop Question Answering Model ::: Returned Text
We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions.
Experimental Setup ::: Multi-hop Question Answering Model
Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions.
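For illustration, the multi-hop input can be built roughly as follows; the exact separator string depends on the pre-trained model's tokenizer and is an assumption here.

```python
# Sketch: append each (sub-question, sub-answer) pair, in order, to the
# multi-hop question with separator tokens.
SEP = " [SEP] "

def build_multi_hop_input(question, sub_questions, sub_answers):
    parts = [question]
    for sq, sa in zip(sub_questions, sub_answers):
        parts.extend([sq, sa])
    return SEP.join(parts)

# Example:
# build_multi_hop_input(
#     "What profession do H. L. Mencken and Albert Camus have in common?",
#     ["What profession does H. L. Mencken have?", "Who was Albert Camus?"],
#     ["... a journalist and essayist ...", "... a French philosopher and author ..."])
```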
Results on Question Answering
We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions.
We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version.
Results on Question Answering ::: Main Results
Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set).
More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set).
The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing.
After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question.
Results on Question Answering ::: Question Type Breakdown
To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. “Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types.
Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions).
Results on Question Answering ::: Answers to Sub-Questions are Crucial
To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1).
Results on Question Answering ::: How Do Decompositions Help?
Decompositions help by retrieving important supporting evidence needed to answer the question. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts,” i.e., the sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22.
Results on Question Answering ::: Example Decompositions
To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA.
Analysis ::: Unsupervised Decomposition Model ::: Intrinsic Evaluation of Decompositions
We evaluate the quality of decompositions on other metrics aside from downstream QA. To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenshtein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table .
Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive.
Analysis ::: Unsupervised Decomposition Model ::: Quality of Decomposition Model
Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate 5 hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition.
Analysis ::: Single-hop Question Answering Model ::: Sub-Answer Confidence
Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question.
Analysis ::: Single-hop Question Answering Model ::: Changing the Single-hop QA Model
We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble).
Analysis ::: Multi-hop Question Answering Model ::: Varying the Base Model
To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model.
Related Work
Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering.
More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach.
Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44.
Conclusion
We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems.
Acknowledgements
EP is supported by the NSF Graduate Research Fellowship. KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC also thanks eBay and NVIDIA for their support. We thank Paul Christiano, Sebastian Riedel, He He, Jonathan Berant, Alexis Conneau, Jiatao Gu, Sewon Min, Yixin Nie, Lajanugen Logeswaran, and Adam Fisch for helpful feedback, as well as Yichen Jiang and Peng Qi for help with evaluation.
Pseudo-Decompositions
Tables - show examples of pseudo-decompositions and learned decompositions from various models.
Pseudo-Decompositions ::: Variable Length Pseudo-Decompositions
In §SECREF15, we leveraged domain knowledge about the task to fix the pseudo-decomposition length $N=2$. A general algorithm for creating pseudo-decompositions should find a suitable $N$ for each question. We find that Eq. DISPLAY_FORM5 in SECREF4 always results in decompositions of length $N=2$, as the regularization term grows quickly with $N$. Thus, we test another formulation based on Euclidean distance: $d^* = \operatornamewithlimits{argmin}_{d^{\prime } \subset S} \left\Vert \mathbf {v}_q - \sum _{s \in d^{\prime }} \mathbf {v}_s \right\Vert _2$ (Eq. DISPLAY_FORM53).
We create pseudo-decompositions in a similar way as before, first finding a set of candidate sub-questions $S^{\prime } \subset S$ with high cosine similarity to $\mathbf {v}_q$, then performing beam search up to a maximum value of $N$. We test pseudo-decomposition formulations by creating synthetic compositional questions by combining 2-3 single-hop questions with “and.” We then measure the ranking of the correct decomposition (a concatenation of the single-hop questions). For $N=2$, both methods perform well, but Eq. DISPLAY_FORM5 does not work for decompositions where $N=3$, whereas Eq. DISPLAY_FORM53 does, achieving a mean reciprocal rank of 30%. However, Eq. DISPLAY_FORM5 outperforms Eq. DISPLAY_FORM53 on HotpotQA, e.g., achieving 79.9 vs. 79.4 F1 when using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 to answer sub-questions. Eq. DISPLAY_FORM5 is also faster to compute and easier to scale. Moreover, Eq. DISPLAY_FORM53 requires an embedding space where summing sub-question representations is meaningful, whereas Eq. DISPLAY_FORM5 only requires embeddings that encode semantic similarity. Thus, we adopt Eq. DISPLAY_FORM5 for our main experiments. Table contains an example where the variable-length decomposition method mentioned above produces a three-sub-question decomposition whereas the other methods are fixed to two sub-questions.
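A simplified greedy version of this search (beam search in the full setup) can be sketched as below; the function name and the greedy stopping rule are illustrative assumptions.

```python
# Sketch: grow the decomposition while the residual between the question
# embedding and the sum of chosen sub-question embeddings keeps shrinking.
import numpy as np

def greedy_variable_length(q_vec, cand_vecs, max_n=3):
    """q_vec: (dim,) question embedding; cand_vecs: (K, dim) candidate embeddings."""
    chosen, total = [], np.zeros_like(q_vec)
    best = np.linalg.norm(q_vec)
    available = list(range(len(cand_vecs)))
    for _ in range(max_n):
        residuals = [np.linalg.norm(q_vec - (total + cand_vecs[i])) for i in available]
        j = int(np.argmin(residuals))
        if residuals[j] >= best:
            break                          # adding another sub-question no longer helps
        i = available.pop(j)
        chosen.append(i)
        total = total + cand_vecs[i]
        best = residuals[j]
    return chosen
```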
Pseudo-Decompositions ::: Impact of Question Corpus Size
In addition to our previous results on FastText vs. Random pseudo-decompositions, we found it important to use a large question corpus to create pseudo-decompositions. QA F1 increased from 79.2 to 80.1 when we trained decomposition models on pseudo-decompositions comprised of questions retrieved from Common Crawl ($>$10M questions) rather than only SQuAD 2 ($\sim $130K questions), using an appropriately larger beam size (100 $\rightarrow $ 1000).
Pseudo-Decompositions ::: Pseudo-Decomposition Retrieval Method
Table shows QA results with pseudo-decompositions retrieved using summed bag-of-words representations from FastText, TFIDF, and $\textsc {BERT}_{\textsc {LARGE}}$ first-layer hidden states. We also vary the learning method and include results for Curriculum Seq2Seq (CSeq2Seq), where we initialize the USeq2Seq approach with the Seq2Seq model trained on the same data.
Unsupervised Decomposition Model ::: Unsupervised Stopping Criterion
To stop USeq2Seq training, we use an unsupervised stopping criterion to avoid relying on a supervised validation set of decompositions. We generate a decomposition $\hat{d}$ for a multi-hop question $q$, and we measure BLEU between $q$ and the model-generated question $\hat{q}$ for $\hat{d}$, similar to round-trip BLEU in unsupervised translation BIBREF17. We scale round-trip BLEU score by the fraction of “good” decompositions, where a good decomposition has (1) 2 sub-questions (question marks), (2) no sub-question which contains all words in the multi-hop question, and (3) no sub-question longer than the multi-hop question. Without scaling, decomposition models achieve perfect round-trip BLEU by copying the multi-hop question as the decomposition. We measure scaled BLEU across multi-hop questions in HotpotQA dev, and we stop training when the metric does not increase for 3 consecutive epochs.
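The “good decomposition” filter used to scale round-trip BLEU can be sketched as follows; the BLEU computation itself is omitted and the function name is illustrative.

```python
# Sketch: a decomposition is "good" if it has exactly two sub-questions, no
# sub-question contains all words of the multi-hop question, and no
# sub-question is longer than the multi-hop question.
def is_good_decomposition(decomposition: str, multi_hop_question: str) -> bool:
    sub_questions = [s.strip() + "?" for s in decomposition.split("?") if s.strip()]
    if len(sub_questions) != 2:
        return False
    q_tokens = multi_hop_question.lower().split()
    for sq in sub_questions:
        sq_words = set(sq.lower().split())
        if set(q_tokens) <= sq_words or len(sq.split()) > len(q_tokens):
            return False
    return True

# scaled_score = round_trip_bleu * mean(is_good_decomposition(d, q) for q, d in dev_pairs)
```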
It is possible to stop training the decomposition model based on downstream QA accuracy. However, training a QA model on each decomposition model checkpoint (1) is computationally expensive and (2) ties decompositions to a specific, downstream QA model. In Figure FIGREF57, we show downstream QA results across various USeq2Seq checkpoints when using the $\textsc {BERT}_{\textsc {BASE}}$ single-hop QA ensemble from BIBREF3. The unsupervised stopping criterion does not significantly hurt downstream QA compared to using a weakly-supervised stopping criterion.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: MLM Pre-training
We pre-train our encoder-decoder distributed across 8 DGX-1 machines, each with 8, 32GB NVIDIA V100 GPUs interconnected by Infiniband. We pre-train using the largest possible batch size (1536), and we choose the best learning rate ($3 \times 10^{-5}$) based on training loss after a small number of iterations. We chose a maximum sequence length of 128. We keep other hyperparameters identical to those from BIBREF6 used in unsupervised translation.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: USeq2Seq
We train each decomposition model with distributed training across 8, 32GB NVIDIA V100 GPUs. We chose the largest possible batch size (256) and then the largest learning rate which resulted in stable training ($3 \times 10^{-5}$). Other hyperparameters are the same as BIBREF6.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: Seq2Seq
We use a large batch size (1024) and chose the largest learning rate which resulted in stable training across many pseudo-decomposition training corpora ($1 \times 10^{-4}$). We keep other training settings and hyperparameters the same as for USeq2Seq.
Multi-hop QA Model ::: Varying the Number of Training Examples
To understand how decompositions impact performance given different amounts of training data, we vary the number of multi-hop training examples. We use the “medium” and “hard” level labels in HotpotQA to determine which examples are multi-hop. We consider training setups where the multi-hop QA model does or does not use data augmentation via training on HotpotQA “easy”/single-hop questions and SQuAD 2 questions. Fig. FIGREF63 shows the results. Decompositions improve QA, so long as the multi-hop QA model has enough training data, either via single-hop QA examples or enough multi-hop QA examples.
Multi-hop QA Model ::: Training Hyperparameters
To train $\textsc {RoBERTa}_{\textsc {LARGE}}$ , we fix the number of training epochs to 2, as training longer did not help. We sweep over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $, similar to the ranges used in the original paper BIBREF24. We chose the hyperparameters that did best for the baseline QA model (without decompositions) on our validation set: batch size 64, learning rate $1.5 \times 10^{-5}$, and weight decay $0.01$. Similarly, for the experiments with BERT, we fix the number of epochs to 2 and choose hyperparameters by sweeping over the recommended ranges from BIBREF26 for learning rate ($\lbrace 2 \times 10^{-5}, 3 \times 10^{-5}, 5 \times 10^{-5}\rbrace $) and batch size ($\lbrace 16, 32\rbrace $). For $\textsc {BERT}_{\textsc {BASE}}$ , we thus choose learning rate $2 \times 10^{-5}$ and batch size 16, and for $\textsc {BERT}_{\textsc {LARGE}}$ , we use the whole-word masking model with learning rate $2 \times 10^{-5}$ and batch size 32. We train all QA models with mixed precision floating point arithmetic BIBREF45, distributing training across 8, 32GB NVIDIA V100 GPUs.
Multi-hop QA Model ::: Improvements across Detailed Question Types
To better understand where decompositions improve QA, we show the improvement across various fine-grained splits of the evaluation sets in Figures FIGREF66-FIGREF70. | 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, 10 F1 gain on the out-of-domain dev set. |
af1439c68b28c27848203f863675946380d28943 | af1439c68b28c27848203f863675946380d28943_0 | Q: What is the strong baseline that this work outperforms?
Text: Introduction
Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?”
Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision.
Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach.
We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark.
Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA.
Method
We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” $a_1, \dots , a_N$ to each sub-question may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [a_N, s_N])$.
Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$.
Method ::: Unsupervised Question Decomposition
To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6).
Method ::: Unsupervised Question Decomposition ::: Creating Pseudo-Decompositions
For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple question $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse:
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose
Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section):
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: No Learning
We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Sequence-to-Sequence (Seq2Seq)
We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$.
Method ::: Unsupervised Question Decomposition ::: Learning to Decompose ::: Unsupervised Sequence-to-Sequence (USeq2Seq)
We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing.
Method ::: Answering Sub-Questions
To answer the generated sub-questions, we use an off-the-shelf QA model. The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$.
Method ::: QA using Decompositions
Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above.
Experimental Setup ::: Question Answering Task
We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans.
Experimental Setup ::: Unsupervised Decomposition ::: Question Data
We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions
To create pseudo-decompositions, we set the number $N$ of sub-questions per question to 2, as questions in HotpotQA usually involve two reasoning hops. In Appendix §SECREF52, we discuss how our method works when $N$ varies per question.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Similarity-based Retrieval
To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to:
The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Random Retrieval
For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions.
Experimental Setup ::: Unsupervised Decomposition ::: Creating Pseudo-Decompositions ::: Editing Pseudo-Decompositions
Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Pre-training
Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: Seq2Seq
We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions.
Experimental Setup ::: Unsupervised Decomposition ::: Unsupervised Decomposition Models ::: USeq2Seq
We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$).
Experimental Setup ::: Single-hop Question Answering Model
We train our single-hop QA model following prior work from BIBREF3 on HotpotQA.
Experimental Setup ::: Single-hop Question Answering Model ::: Model Architecture
We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows:
We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3.
Experimental Setup ::: Single-hop Question Answering Model ::: Training Data and Ensembling
Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions.
Experimental Setup ::: Single-hop Question Answering Model ::: Returned Text
We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions.
Experimental Setup ::: Multi-hop Question Answering Model
Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions.
Results on Question Answering
We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions.
We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version.
Results on Question Answering ::: Main Results
Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set).
More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set).
The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing.
After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question.
Results on Question Answering ::: Question Type Breakdown
To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. “Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types.
Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions).
Results on Question Answering ::: Answers to Sub-Questions are Crucial
To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1).
Results on Question Answering ::: How Do Decompositions Help?
Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22.
Results on Question Answering ::: Example Decompositions
To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA.
Analysis ::: Unsupervised Decomposition Model ::: Intrinsic Evaluation of Decompositions
We evaluate the quality of decompositions on other metrics aside from downstream QA. To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenshtein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table .
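The two abstractiveness statistics are straightforward to reproduce. A minimal sketch of the token-level Levenshtein distance and the decomposition-to-question length ratio (function and variable names are ours, not from the paper):

```python
def token_levenshtein(a, b):
    """Token-level Levenshtein distance via a single-row dynamic program."""
    a, b = a.split(), b.split()
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution / match
    return dp[-1]

def abstractiveness_stats(question, decomposition):
    """Edit distance and length ratio between a question and its decomposition."""
    dist = token_levenshtein(question, decomposition)
    ratio = len(decomposition.split()) / len(question.split())
    return dist, ratio
```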
Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive.
Analysis ::: Unsupervised Decomposition Model ::: Quality of Decomposition Model
Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition.
Analysis ::: Single-hop Question Answering Model ::: Sub-Answer Confidence
Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question.
Analysis ::: Single-hop Question Answering Model ::: Changing the Single-hop QA Model
We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble).
Analysis ::: Multi-hop Question Answering Model ::: Varying the Base Model
To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model.
Related Work
Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering.
More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach.
Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44.
Conclusion
We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems.
Acknowledgements
EP is supported by the NSF Graduate Research Fellowship. KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC also thanks eBay and NVIDIA for their support. We thank Paul Christiano, Sebastian Riedel, He He, Jonathan Berant, Alexis Conneau, Jiatao Gu, Sewon Min, Yixin Nie, Lajanugen Logeswaran, and Adam Fisch for helpful feedback, as well as Yichen Jiang and Peng Qi for help with evaluation.
Pseudo-Decompositions
Tables - show examples of pseudo-decompositions and learned decompositions from various models.
Pseudo-Decompositions ::: Variable Length Pseudo-Decompositions
In §SECREF15, we leveraged domain knowledge about the task to fix the pseudo-decomposition length $N=2$. A general algorithm for creating pseudo-decompositions should find a suitable $N$ for each question. We find that Eq. DISPLAY_FORM5 in SECREF4 always results in decompositions of length $N=2$, as the regularization term grows quickly with $N$. Thus, we test another formulation based on Euclidean distance:
We create pseudo-decompositions in a similar way as before, first finding a set of candidate sub-questions $S^{\prime } \subset S$ with high cosine similarity to $\mathbf {v}_q$, then performing beam search up to a maximum value of $N$. We test pseudo-decomposition formulations by creating synthetic compositional questions by combining 2-3 single-hop questions with “and.” We then measure the ranking of the correct decomposition (a concatenation of the single-hop questions). For $N=2$, both methods perform well, but Eq. DISPLAY_FORM5 does not work for decompositions where $N=3$, whereas Eq. DISPLAY_FORM53 does, achieving a mean reciprocal rank of 30%. However, Eq. DISPLAY_FORM5 outperforms Eq. DISPLAY_FORM53 on HotpotQA, e.g., achieving 79.9 vs. 79.4 F1 when using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 to answer sub-questions. Eq. DISPLAY_FORM5 is also faster to compute and easier to scale. Moreover, Eq. DISPLAY_FORM53 requires an embedding space where summing sub-question representations is meaningful, whereas Eq. DISPLAY_FORM5 only requires embeddings that encode semantic similarity. Thus, we adopt Eq. DISPLAY_FORM5 for our main experiments. Table contains an example where the variable length decomposition method mentioned above produces a three-sub-question decomposition whereas the other methods are fixed to two sub-questions.
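For concreteness, below is a rough sketch of the retrieval-then-search procedure described above, using a greedy selection in place of beam search and scoring candidate sets only by the cosine similarity between the question embedding and the summed sub-question embeddings (a simplification of Eq. DISPLAY_FORM5; all names and the scoring details are our assumptions):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def pseudo_decompose(q_vec, sub_vecs, n=2, top_k=1000):
    """Greedily pick n mined sub-questions whose summed embedding is most similar
    to the question embedding (a stand-in for candidate retrieval + beam search)."""
    # Candidate filtering: keep the top_k sub-questions closest to the question.
    sims = np.array([cosine(q_vec, v) for v in sub_vecs])
    cands = [int(c) for c in np.argsort(-sims)[:top_k]]
    chosen = []
    for _ in range(n):
        best = max((c for c in cands if c not in chosen),
                   key=lambda c: cosine(q_vec, sub_vecs[chosen + [c]].sum(axis=0)))
        chosen.append(best)
    return chosen  # indices into the mined question corpus
```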
Pseudo-Decompositions ::: Impact of Question Corpus Size
In addition to our previous results on FastText vs. Random pseudo-decompositions, we found it important to use a large question corpus to create pseudo-decompositions. QA F1 increased from 79.2 to 80.1 when we trained decomposition models on pseudo-decompositions comprised of questions retrieved from Common Crawl ($>$10M questions) rather than only SQuAD 2 ($\sim $130K questions), using an appropriately larger beam size (100 $\rightarrow $ 1000).
Pseudo-Decompositions ::: Pseudo-Decomposition Retrieval Method
Table shows QA results with pseudo-decompositions retrieved using summed bag-of-words representations from FastText, TFIDF, and $\textsc {BERT}_{\textsc {LARGE}}$ first-layer hidden states. We also vary the learning method and include results for Curriculum Seq2Seq (CSeq2Seq), where we initialize the USeq2Seq approach with the Seq2Seq model trained on the same data.
Unsupervised Decomposition Model ::: Unsupervised Stopping Criterion
To stop USeq2Seq training, we use an unsupervised stopping criterion to avoid relying on a supervised validation set of decompositions. We generate a decomposition $\hat{d}$ for a multi-hop question $q$, and we measure BLEU between $q$ and the model-generated question $\hat{q}$ for $\hat{d}$, similar to round-trip BLEU in unsupervised translation BIBREF17. We scale round-trip BLEU score by the fraction of “good” decompositions, where a good decomposition has (1) 2 sub-questions (question marks), (2) no sub-question which contains all words in the multi-hop question, and (3) no sub-question longer than the multi-hop question. Without scaling, decomposition models achieve perfect round-trip BLEU by copying the multi-hop question as the decomposition. We measure scaled BLEU across multi-hop questions in HotpotQA dev, and we stop training when the metric does not increase for 3 consecutive epochs.
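A minimal sketch of this stopping criterion follows, using sacrebleu as the BLEU implementation (our choice; the paper does not specify one) and our own reading of the three “good decomposition” filters:

```python
import sacrebleu

def is_good_decomposition(q, d):
    """The three filters described above: exactly two question marks, no sub-question
    containing every word of q, and no sub-question longer than q."""
    subs = [s.strip() for s in d.split("?") if s.strip()]
    has_two = d.count("?") == 2
    q_words = set(q.split())
    no_copy = all(not q_words.issubset(set(s.split())) for s in subs)
    not_long = all(len(s.split()) <= len(q.split()) for s in subs)
    return has_two and no_copy and not_long

def scaled_round_trip_bleu(questions, decomps, reconstructions):
    """Round-trip BLEU(q, q_hat) scaled by the fraction of good decompositions."""
    bleu = sacrebleu.corpus_bleu(reconstructions, [questions]).score
    good = sum(is_good_decomposition(q, d)
               for q, d in zip(questions, decomps)) / len(questions)
    return bleu * good
```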
It is possible to stop training the decomposition model based on downstream QA accuracy. However, training a QA model on each decomposition model checkpoint (1) is computationally expensive and (2) ties decompositions to a specific, downstream QA model. In Figure FIGREF57, we show downstream QA results across various USeq2Seq checkpoints when using the $\textsc {BERT}_{\textsc {BASE}}$ single-hop QA ensemble from BIBREF3. The unsupervised stopping criterion does not significantly hurt downstream QA compared to using a weakly-supervised stopping criterion.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: MLM Pre-training
We pre-train our encoder-decoder distributed across 8 DGX-1 machines, each with 8, 32GB NVIDIA V100 GPUs interconnected by Infiniband. We pre-train using the largest possible batch size (1536), and we choose the best learning rate ($3 \times 10^{-5}$) based on training loss after a small number of iterations. We chose a maximum sequence length of 128. We keep other hyperparameters identical to those from BIBREF6 used in unsupervised translation.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: USeq2Seq
We train each decomposition model with distributed training across 8, 32GB NVIDIA V100 GPUs. We chose the largest possible batch size (256) and then the largest learning rate which resulted in stable training ($3 \times 10^{-5}$). Other hyperparameters are the same as BIBREF6.
Unsupervised Decomposition Model ::: Training Hyperparameters ::: Seq2Seq
We use a large batch size (1024) and chose the largest learning rate which resulted in stable training across many pseudo-decomposition training corpora ($1 \times 10^{-4}$). We keep other training settings and hyperparameters the same as for USeq2Seq.
Multi-hop QA Model ::: Varying the Number of Training Examples
To understand how decompositions impact performance given different amounts of training data, we vary the number of multi-hop training examples. We use the “medium” and “hard” level labels in HotpotQA to determine which examples are multi-hop. We consider training setups where the multi-hop QA model does or does not use data augmentation via training on HotpotQA “easy”/single-hop questions and SQuAD 2 questions. Fig. FIGREF63 shows the results. Decompositions improve QA, so long as the multi-hop QA model has enough training data, either via single-hop QA examples or enough multi-hop QA examples.
Multi-hop QA Model ::: Training Hyperparameters
To train $\textsc {RoBERTa}_{\textsc {LARGE}}$ , we fix the number of training epochs to 2, as training longer did not help. We sweep over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $, similar to the ranges used in the original paper BIBREF24. We chose the hyperparameters that did best for the baseline QA model (without decompositions) on our validation set: batch size 64, learning rate $1.5 \times 10^{-5}$, and weight decay $0.01$. Similarly, for the experiments with BERT, we fix the number of epochs to 2 and choose hyperparameters by sweeping over the recommended ranges from BIBREF26 for learning rate ($\lbrace 2 \times 10^{-5}, 3 \times 10^{-5}, 5 \times 10^{-5}\rbrace $) and batch size ($\lbrace 16, 32\rbrace $). For $\textsc {BERT}_{\textsc {BASE}}$ , we thus choose learning rate $2 \times 10^{-5}$ and batch size 16, and for $\textsc {BERT}_{\textsc {LARGE}}$ , we use the whole-word masking model with learning rate $2 \times 10^{-5}$ and batch size 32. We train all QA models with mixed precision floating point arithmetic BIBREF45, distributing training across 8, 32GB NVIDIA V100 GPUs.
Multi-hop QA Model ::: Improvements across Detailed Question Types
To better understand where decompositions improve QA, we show the improvement across various fine-grained splits of the evaluation sets in Figures FIGREF66-FIGREF70. | RoBERTa baseline |
046ff04d1018447b22e00acb125125cae5a23fb7 | 046ff04d1018447b22e00acb125125cae5a23fb7_0 | Q: Which dataset do they use?
Text: Introduction
Simultaneous translation is a translation task where the translation process starts before the end of an input. It helps real-time spoken language communications such as human conversations and public talks. A usual machine translation system works in the sentence level and starts its translation process after it reads the end of a sentence. It would not be appropriate for spoken languages due to roughly two issues: (1) sentence boundaries are not clear and (2) a large latency occurs for a long input.
Previous studies tackled this problem with an incremental process in order to reduce the translation latency for a given input. fujita13interspeech proposed a phrase-based approach to simultaneous translation based on phrasal reordering probabilities. oda-etal-2015-syntax proposed a syntax-based method to determine when to start translating observed inputs. Such an approach faces a trade-off between speed and accuracy; reducing the translation latency using very limited context information also causes a loss in translation accuracy. This becomes especially serious in a syntactically distant language pair such as English and Japanese, where we sometimes have to wait for a latter part of the source sentence to determine the corresponding former part of the target sentence.
Recent neural machine translation (NMT) studies tried an incremental processing for the simultaneous translation. gu2017learning proposed a reinforcement learning approach to determine when to translate based on two different actions: READ to take one input token and WRITE to generate one output token. While they reported some latency reduction without the loss of translation accuracy, the NMT model itself is trained independently from this incremental manner and is not fully optimized for simultaneous translation. ma2018stacl proposed a very simple incremental method called Wait-k, where the decoder starts to generate output tokens after the encoder reads k tokens and then works token-by-token. Here, some required inputs may not be observed by the encoder; however, the decoder has to predict the next output token even in that case. This approach enables a simple end-to-end simultaneous NMT with implicit anticipation of unobserved inputs. It showed high translation accuracy with small latency on some common English-to-German and Chinese-to-English datasets. The latency hyperparameter k can be used to control the speed-accuracy trade-off, but it has to be large enough for a distant language pair like English-Japanese. We observed a problem in translating a phrase longer than k tokens in our pilot study on English-to-Japanese translation.
In this work, we propose a novel incremental NMT method that uses a special token <wait> in the target language which is generated when the translation model chooses to read the next input token instead of generating an output token. The proposed method uses Connectionist Temporal Classification (CTC) BIBREF0 to handle ambiguities in possible positions inserting <wait> in the training time. CTC is applied to sequential model training such as automatic speech recognition, where we have a reference word sequence but do not have the corresponding segmentation or alignment in an acoustic signal. We conduct experiments in English-to-Japanese simultaneous translation with the proposed and baseline methods and show the proposed method achieves a good translation performance with relatively small latency. The proposed method can determine when to wait or translate in an adaptive manner and is useful in simultaneous translation tasks.
Simultaneous machine translation by Wait-k model
First, we review a general NMT model following the formulation by BIBREF1 and the “Wait-k" model BIBREF2 that is the baseline model for simultaneous NMT.
Given a source sentence $X$ and a target sentence $Y$ as follows:
where $\textbf {x}_i \in \mathbb {R}^{S \times 1}$ is a one-hot vector of the i-th input word, $I$ is the length of the input sentence $X$, $\textbf {y}_i \in \mathbb {R}^{T \times 1}$ is a one-hot vector of the i-th output word, and $J$ is the length of the output sentence $Y$.
The problem of translation from the source to the target language can be solved by finding the best target language sentence $\hat{Y}$ that maximizes the conditional probability
In general NMT manner, the conditional probability is decomposed by the product of conditional generation probabilities of $\textbf {y}_{j}$ given the source sentence $X$ and preceding target words $\textbf {y}_{<j}$:
where $\textbf {y}_{<j}$ represents the target words up to position $j$, and $\theta $ indicates the model parameters. In contrast, the model for simultaneous translation has to output translated words given only prefix words of the source sentence. Therefore, the conditional probability is decomposed as follows:
where $\textbf {x}_{<g(j)}$ are the source words up to position $g(j)$, and $g(j)$ represents the number of encoded source tokens when the model outputs $j$ words. In the “Wait-k” model, $g(j)$ is defined as follows:
Here, $k$ is the hyperparameter which indicates the target sentence generation is $k$ tokens behind the source sentence input and it takes a constant value in the “Wait-k" model.
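As a concrete illustration of this schedule, the standard Wait-k policy reads the first $k$ source tokens before emitting the first target token and then one additional source token per emitted target token, capped by the source length. A minimal sketch under that standard definition (our own naming, not copied from this paper):

```python
def wait_k_schedule(j, k, src_len):
    """Number of source tokens read before emitting the j-th target token (1-indexed)."""
    return min(k + j - 1, src_len)
```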
The model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs); the encoder converts source words into a sequence of vectors, and the decoder generates target language words one-by-one with the attention mechanism based on the conditional probability shown in the equation DISPLAY_FORM2 and DISPLAY_FORM3. The details are described below.
Simultaneous machine translation by Wait-k model ::: Encoder
The encoder takes a sequence of a source sentence $X$ as inputs and returns forward hidden vectors $\overrightarrow{\textbf {h}_i}(1 \le i \le I)$ of the forward RNNs:
In the general NMT model, they also calculate backward hidden vectors of backward RNNs from a reversed source sentence. However, we only use forward hidden vectors because we cannot use the information of the whole sentence on the simultaneous translation task.
Simultaneous machine translation by Wait-k model ::: Decoder with Attention
The decoder takes the source hidden vectors as inputs and returns target language words one-by-one with the attention mechanism. The decoder RNN recurrently generates target words using its hidden state and an output context. The conditional generation probability of the target word $\textbf {y}_i$ is defined as follows:
Here, $\textbf {W}_c, \textbf {W}_p$ are trainable parameters and $\textbf {c}_j$ is a context vector to retrieve source language inputs in forms of a weighted sum of the source hidden vectors $\textbf {h}_j$, defined as follows.
The score function above can be defined in some different ways as discussed by BIBREF1. In this paper, we use dot attention for this score function.
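A minimal sketch of dot attention over the currently observed source positions (in the simultaneous setting, encoder_states would hold only the first $g(j)$ source states); shapes and names are our own:

```python
import torch
import torch.nn.functional as F

def dot_attention(decoder_state, encoder_states):
    """Dot-product attention: scores are inner products between the decoder state and
    the observed encoder states; the context is their softmax-weighted sum.
    decoder_state: (batch, dim); encoder_states: (batch, src_len, dim)."""
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = F.softmax(scores, dim=-1)
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # (batch, dim)
    return context, weights
```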
Proposed Method
In this work, we propose a method to decide the output timing adaptively. The proposed method adds a special token <wait> to the target-side vocabulary, which the model outputs when it chooses to delay translation instead of generating a target word.
In this section, we first review a standard objective function, softmax cross-entropy and show the problem that occurs when this function is applied to <wait> (§SECREF10). After that, we introduce an objective function, called Connectionist Temporal Classification, to handle this problem (§SECREF12). Finally, we propose a new objective function to adjust a trade-off between translation accuracy and latency (§SECREF14) and explain how to combine these objective functions (§SECREF16).
Proposed Method ::: Softmax Cross-Entropy
Softmax Cross-Entropy (SCE) is a commonly used token-level objective function for multi-class classification including word generation in NMT, defined as follows:
where $\textbf {y}_{jk}$ is the k-th element of the one-hot vector corresponding to the j-th word of the reference sentence, and $p(\textbf {y}_{jk}|\cdot )$ is the generation probability of $\textbf {y}_{jk}$.
A correct sequence that corresponds to the output sequence one-by-one is necessary to use SCE as an objective function for NMT. However, in the proposed method, we cannot simply use SCE because we do not know when the delay should occur. To avoid this problem, we set the loss for delay tokens to 0 during the time steps $t\ (t \le g(I))$ in which the model can output <wait>, that is, while the source sentence is still being read.
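As a rough illustration only, one possible reading of this masked loss is that delay steps contribute zero loss and the reference pointer advances only on non-delay steps; everything beyond the masking rule itself is our assumption, and it is exactly this alignment ambiguity that the CTC loss below resolves.

```python
import torch
import torch.nn.functional as F

def masked_sce_loss(step_logits, reference, wait_id, src_len):
    """Sketch of masked SCE: steps where the model emits <wait> while the source is
    still being read (t <= g(I)) contribute zero loss; other steps are scored against
    the next reference token. step_logits: list of (vocab,) tensors, one per step."""
    loss, ref_pos = 0.0, 0
    for t, logits in enumerate(step_logits):
        pred = int(logits.argmax())
        if pred == wait_id and t < src_len:   # delay step: no loss, reference stays put
            continue
        if ref_pos < len(reference):
            loss = loss + F.cross_entropy(logits.unsqueeze(0),
                                          torch.tensor([reference[ref_pos]]))
            ref_pos += 1
    return loss
```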
Proposed Method ::: Connectionist Temporal Classification
As mentioned in the previous section, we set the loss value for <wait> to 0, but this causes the problem that the generation of <wait> is not optimized. To address this problem, we use an objective function called Connectionist Temporal Classification (CTC) BIBREF0 for sequence-level optimization.
CTC extends the output sequence to a path $\mathbf {\pi } \in \Omega (\textbf {y})$ of length $T$ by allowing token repetitions and <wait> outputs. Conversely, we can recover the original output sequence $\textbf {y} = \Omega ^{-1}(\mathbf {\pi })$ by removing <wait> and all token repetitions. The objective function is defined as the sum of the probabilities of all possible paths $\mathbf {\pi } \in \Omega (\textbf {y})$, computed with the forward-backward algorithm, as follows:
where $\pi _t$ is a t-th element of $\mathbf {\pi }$.
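Because removing <wait> and collapsing repetitions mirrors the standard CTC collapse rule with <wait> playing the role of the blank symbol, this objective can be computed with an off-the-shelf CTC loss. A sketch using PyTorch's nn.CTCLoss (our shortcut; the paper describes the forward-backward computation directly):

```python
import torch
import torch.nn as nn

WAIT_ID = 0  # assumed vocabulary index of <wait>, treated as the CTC blank
ctc_loss = nn.CTCLoss(blank=WAIT_ID, zero_infinity=True)

def ctc_objective(log_probs, targets, output_lengths, target_lengths):
    """log_probs: (T, batch, vocab) log-softmax outputs over the extended output;
    targets: (batch, max_target_len) reference token ids without <wait>."""
    return ctc_loss(log_probs, targets, output_lengths, target_lengths)
```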
Proposed Method ::: Delay Penalty
Furthermore, we introduce a new objective function, called Delay Penalty, to control latency. We use this function only when an output token causes the delay; that is, when the model outputs <wait> or the same token as a previous one. Delay Penalty is defined by a negative log-likelihood of the probabilities for non-delayed tokens, as follows:
Proposed Method ::: Objective Function
For optimization, we combine three objective functions introduced so far, as follows:
Here, $\alpha $ is a hyperparameter to adjust the amount of latency directly.
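A sketch of the combined objective, assuming the combination is a simple sum of the SCE and CTC terms plus $\alpha$ times the delay penalty, and reading the delay penalty as the negative log of the probability mass assigned to non-delay tokens at each delaying step (both readings are ours, based on the description above):

```python
import torch

def delay_penalty(step_log_probs, step_preds, wait_id):
    """Negative log of the non-delay probability mass at each step whose prediction
    causes a delay (<wait> or a repetition of the previous output token)."""
    penalty, prev = 0.0, None
    for log_p, pred in zip(step_log_probs, step_preds):
        if pred == wait_id or pred == prev:
            probs = log_p.exp()
            delay_ids = {wait_id} if prev is None else {wait_id, prev}
            non_delay = probs.sum() - sum(probs[i] for i in delay_ids)
            penalty = penalty - torch.log(non_delay + 1e-9)
        prev = pred
    return penalty

def total_loss(l_sce, l_ctc, l_delay, alpha):
    """Assumed combination: L = L_SCE + L_CTC + alpha * L_delay."""
    return l_sce + l_ctc + alpha * l_delay
```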
Experiments
We conducted simultaneous translation experiments from English to Japanese and discussed accuracy, latency, and issues for translation results.
Experiments ::: Settings
All models were implemented as described in the previous sections using PyTorch. Both the encoders and the decoders were two-layered unidirectional LSTMs BIBREF3, and the decoder used input feeding BIBREF1. The number of dimensions in word embeddings and hidden vectors was set to 512, and the minibatch size was 64. We used Adam BIBREF4 for optimization with the default parameters. The learning rate was set to $10^{-1}$, and gradient clipping was set to 5. The dropout probability was set to $0.3$. The learning rate was adjusted by a decay factor of $1/\sqrt{2}$ when the validation loss was larger than that in the previous epoch. Then, we chose the model parameters with the smallest validation loss for evaluation.
We used two different corpora for the experiments: small_parallel_enja and the Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5. small_parallel_enja is a small-scale corpus consisting of sentences filtered to lengths of 4 to 16 words, and ASPEC is a mid-scale corpus in the scientific paper domain. Table TABREF21 shows their detailed statistics.
All datasets were tokenized into subword units BIBREF6, BIBREF7 using Sentencepiece. The source and target language vocabularies were independent, and their sizes were set to 4000 tokens for small_parallel_enja and 8000 tokens for ASPEC, respectively. We filtered out training sentences with more than 60 tokens or a length ratio greater than 9.
We used “Wait-k” models and general NMT models as baselines. The general NMT models were attention-based encoder-decoders that translated from full-length source sentences (called Full Sentence). For evaluation metrics, we used BLEU BIBREF8 and RIBES BIBREF9 to measure translation accuracy, and token-level delay to measure latency. We used Kytea BIBREF10 as the tokenizer for evaluating Japanese translation accuracy.
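The subword preprocessing and length filtering above can be reproduced roughly as follows; corpus file names and the exact length-ratio definition are assumptions on our part:

```python
import sentencepiece as spm

# Train independent source/target subword models (vocabulary sizes as in the setup above).
spm.SentencePieceTrainer.train(input="train.en", model_prefix="sp_en", vocab_size=4000)
spm.SentencePieceTrainer.train(input="train.ja", model_prefix="sp_ja", vocab_size=4000)
sp_en = spm.SentencePieceProcessor(model_file="sp_en.model")
sp_ja = spm.SentencePieceProcessor(model_file="sp_ja.model")

def keep_pair(src, tgt, max_len=60, max_ratio=9.0):
    """Drop training pairs that are too long or too unbalanced in subword length."""
    s = sp_en.encode(src, out_type=str)
    t = sp_ja.encode(tgt, out_type=str)
    short, long = sorted((len(s), len(t)))
    return long <= max_len and long / max(short, 1) <= max_ratio
```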
Experiments ::: Experiments with Small-scale Corpus
We conducted small-scale experiments using small_parallel_enja. We compared different hyperparameters: $k = \lbrace 3, 5\rbrace $ and $\alpha = \lbrace 0, 0.01, 0.03, 0.05\rbrace $.
Table TABREF24 shows the latency and automatic evaluation scores on small_parallel_enja. The full sentence scores are upper bounds for the incremental methods. The proposed method reduced the average latency by more than 50% from the full sentence baseline with some loss in BLEU and RIBES. The BLEU and RIBES results of the proposed method were worse than those of Wait-k. This would be due to some degradation in the parts with smaller latency, which were determined adaptively by the proposed method while Wait-k keeps the latency fixed.
Experiments ::: Experiments with Mid-scale Corpus
We investigated the performance on longer and more complex sentences by the experiments using ASPEC. We compared different hyperparameters: $k = \lbrace 5, 7\rbrace $ and $\alpha = \lbrace 0.03, 0.05, 0.1\rbrace $.
Table TABREF26 shows the results in latency and automatic evaluation scores on ASPEC. We can see the proposed method showed much larger latency than Wait-k. This is probably due to many long and complex phrases used in scientific articles in ASPEC. Wait-k has to translate such a long phrase without sufficient input observations due to its strict fixed latency strategy. On the other hand, the proposed method can wait for more input tokens adaptively by generating <wait> at the cost of large latency.
Experiments ::: Discussion
In the experimental results above, the proposed method determined the translation latency adaptively, short delay for short and simple inputs as in small_parallel_enja and long delay for long and complex inputs as in ASPEC. Here we discuss our results in detail using some examples.
Table TABREF28 shows translation examples on small_parallel_enja. In the first example, the proposed method gives a correct translation result by waiting adaptively. Wait-k generated the unrelated words 野球 (baseball) and 飲-み (drink) due to the poor input observations under its small fixed latency. The proposed method waited until the subword swim was observed and successfully generated the word 泳-ぐ (swim).
However, the proposed method sometimes generated consecutive <wait> symbols until the end of the input, as shown in the second example. This is probably due to our training strategy; the latency penalty may not be large enough to prefer a small-latency translation at the cost of some increase in the SCE- and CTC-based losses. The translation data in the experiments come from standard translation rather than simultaneous interpretation, so the current data do not fully match the proposed approach. The use of data specialized for simultaneous translation, such as the monotonic translations produced in simultaneous interpretation, would be important in practice.
Conclusion
In this paper, we proposed an adaptive latency control method for simultaneous neural machine translation in syntactically distant language pairs. We introduced a meta token <wait> to wait until the observation of the next input token. We proposed a CTC-based loss function to perform optimization using bilingual data without appropriate positions of <wait> , which is used along with the latency penalty and a standard word prediction loss. The experimental results suggest the proposed method determines when to translate or when to wait in an adaptive manner. Future work includes further analyses on translation accuracy in different latency conditions and time-based latency evaluation instead of the token-based one.
Acknowledgments
A part of this work is supported by JSPS Kakenhi JP17H06101. | small_parallel_enja, Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5 |
5a06f11aa75a8affde3d595c40fb03e06769e368 | 5a06f11aa75a8affde3d595c40fb03e06769e368_0 | Q: Do they trim the search space of possible output sequences?
Text: Introduction
Simultaneous translation is a translation task where the translation process starts before the end of an input. It helps real-time spoken language communications such as human conversations and public talks. A usual machine translation system works in the sentence level and starts its translation process after it reads the end of a sentence. It would not be appropriate for spoken languages due to roughly two issues: (1) sentence boundaries are not clear and (2) a large latency occurs for a long input.
Previous studies tackled this problem with an incremental process in order to reduce the translation latency for a given input. fujita13interspeech proposed a phrase-based approach to simultaneous translation based on phrasal reordering probabilities. oda-etal-2015-syntax proposed a syntax-based method to determine when to start translating observed inputs. Such an approach faces a trade-off between speed and accuracy; reducing the translation latency using very limited context information also causes a loss in translation accuracy. This becomes especially serious in a syntactically distant language pair such as English and Japanese, where we sometimes have to wait for a latter part of the source sentence to determine the corresponding former part of the target sentence.
Recent neural machine translation (NMT) studies tried an incremental processing for the simultaneous translation. gu2017learning proposed a reinforcement learning approach to determine when to translate based on two different actions: READ to take one input token and WRITE to generate one output token. While they reported some latency reduction without the loss of translation accuracy, the NMT model itself is trained independently from this incremental manner and is not fully optimized for simultaneous translation. ma2018stacl proposed a very simple incremental method called Wait-k, where the decoder starts to generate output tokens after the encoder reads k tokens and then works token-by-token. Here, some required inputs may not be observed by the encoder; however, the decoder has to predict the next output token even in that case. This approach enables a simple end-to-end simultaneous NMT with implicit anticipation of unobserved inputs. It showed high translation accuracy with small latency on some common English-to-German and Chinese-to-English datasets. The latency hyperparameter k can be used to control the speed-accuracy trade-off, but it has to be large enough for a distant language pair like English-Japanese. We observed a problem in translating a phrase longer than k tokens in our pilot study on English-to-Japanese translation.
In this work, we propose a novel incremental NMT method that uses a special token <wait> in the target language which is generated when the translation model chooses to read the next input token instead of generating an output token. The proposed method uses Connectionist Temporal Classification (CTC) BIBREF0 to handle ambiguities in possible positions inserting <wait> in the training time. CTC is applied to sequential model training such as automatic speech recognition, where we have a reference word sequence but do not have the corresponding segmentation or alignment in an acoustic signal. We conduct experiments in English-to-Japanese simultaneous translation with the proposed and baseline methods and show the proposed method achieves a good translation performance with relatively small latency. The proposed method can determine when to wait or translate in an adaptive manner and is useful in simultaneous translation tasks.
Simultaneous machine translation by Wait-k model
First, we review a general NMT model following the formulation by BIBREF1 and the “Wait-k" model BIBREF2 that is the baseline model for simultaneous NMT.
Given a source sentence $X$ and a target sentence $Y$ as follows:
where $\textbf {x}_i \in \mathbb {R}^{S \times 1}$ is a one-hot vector of the i-th input word, $I$ is the length of the input sentence $X$, $\textbf {y}_i \in \mathbb {R}^{T \times 1}$ is a one-hot vector of the i-th output word, and $J$ is the length of the output sentence $Y$.
The problem of translation from the source to the target language can be solved by finding the best target language sentence $\hat{Y}$ that maximizes the conditional probability
In general NMT manner, the conditional probability is decomposed by the product of conditional generation probabilities of $\textbf {y}_{j}$ given the source sentence $X$ and preceding target words $\textbf {y}_{<j}$:
where $\textbf {y}_{<j}$ represents the target words up to position $j$, and $\theta $ indicates the model parameters. In contrast, the model for simultaneous translation has to output translated words given only prefix words of the source sentence. Therefore, the conditional probability is decomposed as follows:
where $\textbf {x}_{<g(j)}$ are the source words up to position $g(j)$, and $g(j)$ represents the number of encoded source tokens when the model outputs $j$ words. In the “Wait-k” model, $g(j)$ is defined as follows:
Here, $k$ is the hyperparameter which indicates the target sentence generation is $k$ tokens behind the source sentence input and it takes a constant value in the “Wait-k" model.
The model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs); the encoder converts source words into a sequence of vectors, and the decoder generates target language words one-by-one with the attention mechanism based on the conditional probability shown in the equation DISPLAY_FORM2 and DISPLAY_FORM3. The details are described below.
Simultaneous machine translation by Wait-k model ::: Encoder
The encoder takes a sequence of a source sentence $X$ as inputs and returns forward hidden vectors $\overrightarrow{\textbf {h}_i}(1 \le i \le I)$ of the forward RNNs:
In the general NMT model, they also calculate backward hidden vectors of backward RNNs from a reversed source sentence. However, we only use forward hidden vectors because we cannot use the information of the whole sentence on the simultaneous translation task.
Simultaneous machine translation by Wait-k model ::: Decoder with Attention
The decoder takes the source hidden vectors as inputs and returns target language words one-by-one with the attention mechanism. The decoder RNN recurrently generates target words using its hidden state and an output context. The conditional generation probability of the target word $\textbf {y}_i$ is defined as follows:
Here, $\textbf {W}_c, \textbf {W}_p$ are trainable parameters and $\textbf {c}_j$ is a context vector to retrieve source language inputs in forms of a weighted sum of the source hidden vectors $\textbf {h}_j$, defined as follows.
The score function above can be defined in some different ways as discussed by BIBREF1. In this paper, we use dot attention for this score function.
Proposed Method
In this work, we propose a method to decide the output timing adaptively. The proposed method adds a special token <wait> to the target-side vocabulary, which the model outputs when it chooses to delay translation instead of generating a target word.
In this section, we first review a standard objective function, softmax cross-entropy and show the problem that occurs when this function is applied to <wait> (§SECREF10). After that, we introduce an objective function, called Connectionist Temporal Classification, to handle this problem (§SECREF12). Finally, we propose a new objective function to adjust a trade-off between translation accuracy and latency (§SECREF14) and explain how to combine these objective functions (§SECREF16).
Proposed Method ::: Softmax Cross-Entropy
Softmax Cross-Entropy (SCE) is a commonly used token-level objective function for multi-class classification including word generation in NMT, defined as follows:
where $\textbf {y}_{jk}$ is the k-th element of the one-hot vector corresponding to the j-th word of the reference sentence, and $p(\textbf {y}_{jk}|\cdot )$ is the generation probability of $\textbf {y}_{jk}$.
A correct sequence that corresponds to the output sequence one-by-one is necessary to use SCE as an objective function for NMT. However, in the proposed method, we cannot simply use SCE because we do not know when the delay should occur. To avoid this problem, we set the loss for delay tokens to 0 during the time steps $t\ (t \le g(I))$ in which the model can output <wait>, that is, while the source sentence is still being read.
Proposed Method ::: Connectionist Temporal Classification
As mentioned in the previous section, we set the loss value for <wait> to 0, but this causes the problem that the generation of <wait> is not optimized. To address this problem, we use an objective function called Connectionist Temporal Classification (CTC) BIBREF0 for sequence-level optimization.
CTC extends the output sequence to a path $\mathbf {\pi } \in \Omega (\textbf {y})$ of length $T$ by allowing token repetitions and <wait> outputs. Conversely, we can recover the original output sequence $\textbf {y} = \Omega ^{-1}(\mathbf {\pi })$ by removing <wait> and all token repetitions. The objective function is defined as the sum of the probabilities of all possible paths $\mathbf {\pi } \in \Omega (\textbf {y})$, computed with the forward-backward algorithm, as follows:
where $\pi _t$ is a t-th element of $\mathbf {\pi }$.
Proposed Method ::: Delay Penalty
Furthermore, we introduce a new objective function, called Delay Penalty, to control latency. We use this function only when an output token causes the delay; that is, when the model outputs <wait> or the same token as a previous one. Delay Penalty is defined by a negative log-likelihood of the probabilities for non-delayed tokens, as follows:
Proposed Method ::: Objective Function
For optimization, we combine three objective functions introduced so far, as follows:
Here, $\alpha $ is a hyperparameter to adjust the amount of latency directly.
Experiments
We conducted simultaneous translation experiments from English to Japanese and discussed accuracy, latency, and issues for translation results.
Experiments ::: Settings
All models were implemented as described in the previous sections using PyTorch. Both the encoders and the decoders were two-layered unidirectional LSTMs BIBREF3, and the decoder used input feeding BIBREF1. The number of dimensions in word embeddings and hidden vectors was set to 512, and the minibatch size was 64. We used Adam BIBREF4 for optimization with the default parameters. The learning rate was set to $10^{-1}$, and gradient clipping was set to 5. The dropout probability was set to $0.3$. The learning rate was adjusted by a decay factor of $1/\sqrt{2}$ when the validation loss was larger than that in the previous epoch. Then, we chose the model parameters with the smallest validation loss for evaluation.
We used two different corpora for the experiments: small_parallel_enja and the Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5. small_parallel_enja is a small-scale corpus consisting of sentences filtered to lengths of 4 to 16 words, and ASPEC is a mid-scale corpus in the scientific paper domain. Table TABREF21 shows their detailed statistics.
All datasets were tokenized into subword units BIBREF6, BIBREF7 using Sentencepiece. The source and target language vocabularies were independent, and their sizes were set to 4000 tokens for small_parallel_enja and 8000 tokens for ASPEC, respectively. We filtered out training sentences with more than 60 tokens or a length ratio greater than 9.
We used “Wait-k” models and general NMT models as baselines. The general NMT models were attention-based encoder-decoders that translated from full-length source sentences (called Full Sentence). For evaluation metrics, we used BLEU BIBREF8 and RIBES BIBREF9 to measure translation accuracy, and token-level delay to measure latency. We used Kytea BIBREF10 as the tokenizer for evaluating Japanese translation accuracy.
Experiments ::: Experiments with Small-scale Corpus
We conducted small-scale experiments using small_parallel_enja. We compared different hyperparameters: $k = \lbrace 3, 5\rbrace $ and $\alpha = \lbrace 0, 0.01, 0.03, 0.05\rbrace $.
Table TABREF24 shows the latency and automatic evaluation scores on small_parallel_enja. The full sentence scores are upper bounds for the incremental methods. The proposed method reduced the average latency by more than 50% from the full sentence baseline with some loss in BLEU and RIBES. The BLEU and RIBES results of the proposed method were worse than those of Wait-k. This would be due to some degradation in the parts with smaller latency, which were determined adaptively by the proposed method while Wait-k keeps the latency fixed.
Experiments ::: Experiments with Mid-scale Corpus
We investigated the performance on longer and more complex sentences by the experiments using ASPEC. We compared different hyperparameters: $k = \lbrace 5, 7\rbrace $ and $\alpha = \lbrace 0.03, 0.05, 0.1\rbrace $.
Table TABREF26 shows the results in latency and automatic evaluation scores on ASPEC. We can see the proposed method showed much larger latency than Wait-k. This is probably due to many long and complex phrases used in scientific articles in ASPEC. Wait-k has to translate such a long phrase without sufficient input observations due to its strict fixed latency strategy. On the other hand, the proposed method can wait for more input tokens adaptively by generating <wait> at the cost of large latency.
Experiments ::: Discussion
In the experimental results above, the proposed method determined the translation latency adaptively, short delay for short and simple inputs as in small_parallel_enja and long delay for long and complex inputs as in ASPEC. Here we discuss our results in detail using some examples.
Table TABREF28 shows translation examples on small_parallel_enja. In the first example, the proposed method gives a correct translation result by waiting adaptively. Wait-k generated the unrelated words 野球 (baseball) and 飲-み (drink) due to the poor input observations under its small fixed latency. The proposed method waited until the subword swim was observed and successfully generated the word 泳-ぐ (swim).
However, the proposed method sometimes generated consecutive <wait> symbols until the end of the input, as shown in the second example. This is probably due to our training strategy; the latency penalty may not be large enough to prefer a small-latency translation at the cost of some increase in the SCE- and CTC-based losses. The translation data in the experiments come from standard translation rather than simultaneous interpretation, so the current data do not fully match the proposed approach. The use of data specialized for simultaneous translation, such as the monotonic translations produced in simultaneous interpretation, would be important in practice.
Conclusion
In this paper, we proposed an adaptive latency control method for simultaneous neural machine translation in syntactically distant language pairs. We introduced a meta token <wait> to wait until the observation of the next input token. We proposed a CTC-based loss function to perform optimization using bilingual data without appropriate positions of <wait> , which is used along with the latency penalty and a standard word prediction loss. The experimental results suggest the proposed method determines when to translate or when to wait in an adaptive manner. Future work includes further analyses on translation accuracy in different latency conditions and time-based latency evaluation instead of the token-based one.
Acknowledgments
A part of this work is supported by JSPS Kakenhi JP17H06101. | No |
ffbd6f583692db66b719a846ba2b7f6474df481a | ffbd6f583692db66b719a846ba2b7f6474df481a_0 | Q: Which model architecture do they use to build a model?
Text: Introduction
Simultaneous translation is a translation task where the translation process starts before the end of an input. It helps real-time spoken language communications such as human conversations and public talks. A usual machine translation system works in the sentence level and starts its translation process after it reads the end of a sentence. It would not be appropriate for spoken languages due to roughly two issues: (1) sentence boundaries are not clear and (2) a large latency occurs for a long input.
Previous studies tackled this problem with an incremental process in order to reduce the translation latency for a given input. fujita13interspeech proposed a phrase-based approach to simultaneous translation based on phrasal reordering probabilities. oda-etal-2015-syntax proposed a syntax-based method to determine when to start translating observed inputs. Such an approach faces a trade-off between speed and accuracy; reducing the translation latency using very limited context information also causes a loss in translation accuracy. This becomes especially serious in a syntactically distant language pair such as English and Japanese, where we sometimes have to wait for a latter part of the source sentence to determine the corresponding former part of the target sentence.
Recent neural machine translation (NMT) studies tried an incremental processing for the simultaneous translation. gu2017learning proposed a reinforcement learning approach to determine when to translate based on two different actions: READ to take one input token and WRITE to generate one output token. While they reported some latency reduction without the loss of translation accuracy, the NMT model itself is trained independently from this incremental manner and is not fully optimized for simultaneous translation. ma2018stacl proposed a very simple incremental method called Wait-k, where the decoder starts to generate output tokens after the encoder reads k tokens and then works token-by-token. Here, some required inputs may not be observed by the encoder; however, the decoder has to predict the next output token even in that case. This approach enables a simple end-to-end simultaneous NMT with implicit anticipation of unobserved inputs. It showed high translation accuracy with small latency on some common English-to-German and Chinese-to-English datasets. The latency hyperparameter k can be used to control the speed-accuracy trade-off, but it has to be large enough for a distant language pair like English-Japanese. We observed a problem in translating a phrase longer than k tokens in our pilot study on English-to-Japanese translation.
In this work, we propose a novel incremental NMT method that uses a special token <wait> in the target language which is generated when the translation model chooses to read the next input token instead of generating an output token. The proposed method uses Connectionist Temporal Classification (CTC) BIBREF0 to handle ambiguities in possible positions inserting <wait> in the training time. CTC is applied to sequential model training such as automatic speech recognition, where we have a reference word sequence but do not have the corresponding segmentation or alignment in an acoustic signal. We conduct experiments in English-to-Japanese simultaneous translation with the proposed and baseline methods and show the proposed method achieves a good translation performance with relatively small latency. The proposed method can determine when to wait or translate in an adaptive manner and is useful in simultaneous translation tasks.
Simultaneous machine translation by Wait-k model
First, we review a general NMT model following the formulation by BIBREF1 and the “Wait-k" model BIBREF2 that is the baseline model for simultaneous NMT.
Given a source sentence $X$ and a target sentence $Y$ as follows:
where $\textbf {x}_i \in \mathbb {R}^{S \times 1}$ is a one-hot vector of the i-th input word, $I$ is the length of the input sentence $X$, $\textbf {y}_i \in \mathbb {R}^{T \times 1}$ is a one-hot vector of the i-th output word, and $J$ is the length of the output sentence $Y$.
The problem of translation from the source to the target language can be solved by finding the best target language sentence $\hat{Y}$ that maximizes the conditional probability
In general NMT manner, the conditional probability is decomposed by the product of conditional generation probabilities of $\textbf {y}_{j}$ given the source sentence $X$ and preceding target words $\textbf {y}_{<j}$:
where $\textbf {y}_{<j}$ represents the target words up to position $j$, and $\theta $ indicates the model parameters. In contrast, the model for simultaneous translation has to output translated words given only prefix words of the source sentence. Therefore, the conditional probability is decomposed as follows:
where $\textbf {x}_{<g(j)}$ are the source words up to position $g(j)$, and $g(j)$ represents the number of encoded source tokens when the model outputs $j$ words. In the “Wait-k” model, $g(j)$ is defined as follows:
Here, $k$ is the hyperparameter which indicates the target sentence generation is $k$ tokens behind the source sentence input and it takes a constant value in the “Wait-k" model.
The model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs); the encoder converts source words into a sequence of vectors, and the decoder generates target language words one-by-one with the attention mechanism based on the conditional probability shown in the equation DISPLAY_FORM2 and DISPLAY_FORM3. The details are described below.
Simultaneous machine translation by Wait-k model ::: Encoder
The encoder takes a sequence of a source sentence $X$ as inputs and returns forward hidden vectors $\overrightarrow{\textbf {h}_i}(1 \le i \le I)$ of the forward RNNs:
In the general NMT model, they also calculate backward hidden vectors of backward RNNs from a reversed source sentence. However, we only use forward hidden vectors because we cannot use the information of the whole sentence on the simultaneous translation task.
Simultaneous machine translation by Wait-k model ::: Decoder with Attention
The decoder takes the source hidden vectors as inputs and returns target language words one-by-one with the attention mechanism. The decoder RNN recurrently generates target words using its hidden state and an output context. The conditional generation probability of the target word $\textbf {y}_i$ is defined as follows:
Here, $\textbf {W}_c, \textbf {W}_p$ are trainable parameters and $\textbf {c}_j$ is a context vector to retrieve source language inputs in forms of a weighted sum of the source hidden vectors $\textbf {h}_j$, defined as follows.
The score function above can be defined in some different ways as discussed by BIBREF1. In this paper, we use dot attention for this score function.
Proposed Method
In this work, we propose a method to decide the output timing adaptively. The proposed method adds a special token <wait> to the target-side vocabulary, which the model outputs when it chooses to delay translation instead of generating a target word.
In this section, we first review a standard objective function, softmax cross-entropy and show the problem that occurs when this function is applied to <wait> (§SECREF10). After that, we introduce an objective function, called Connectionist Temporal Classification, to handle this problem (§SECREF12). Finally, we propose a new objective function to adjust a trade-off between translation accuracy and latency (§SECREF14) and explain how to combine these objective functions (§SECREF16).
Proposed Method ::: Softmax Cross-Entropy
Softmax Cross-Entropy (SCE) is a commonly used token-level objective function for multi-class classification including word generation in NMT, defined as follows:
where $\textbf {y}_{jk}$ is the k-th element of the one-hot vector corresponding to the j-th word of the reference sentence, and $p(\textbf {y}_{jk}|\cdot )$ is the generation probability of $\textbf {y}_{jk}$.
A correct sequence that corresponds to the output sequence one-by-one is necessary to use SCE as an objective function for NMT. However, in the proposed method, we cannot simply use SCE because we do not know when the delay should occur. To avoid this problem, we set the loss for delay tokens to 0 during the time steps $t\ (t \le g(I))$ in which the model can output <wait>, that is, while the source sentence is still being read.
Proposed Method ::: Connectionist Temporal Classification
As mentioned in the previous section, we set the loss value for <wait> to 0, but this causes the problem that the generation of <wait> is not optimized. To address this problem, we use an objective function called Connectionist Temporal Classification (CTC) BIBREF0 for sequence-level optimization.
CTC extends the output sequence to a path $\mathbf {\pi } \in \Omega (\textbf {y})$ of length $T$ by allowing token repetitions and <wait> outputs. Conversely, we can recover the original output sequence $\textbf {y} = \Omega ^{-1}(\mathbf {\pi })$ by removing <wait> and all token repetitions. The objective function is defined as the sum of the probabilities of all possible paths $\mathbf {\pi } \in \Omega (\textbf {y})$, computed with the forward-backward algorithm, as follows:
where $\pi _t$ is the $t$-th element of $\mathbf {\pi }$.
Proposed Method ::: Delay Penalty
Furthermore, we introduce a new objective function, called Delay Penalty, to control latency. We use this function only when an output token causes a delay, that is, when the model outputs <wait> or the same token as the previous one. Delay Penalty is defined as a negative log-likelihood of the probabilities of non-delayed tokens, as follows:
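One formulation consistent with this description (the notation is ours), writing $\mathcal {D}$ for the set of time steps at which the output causes a delay and $p_{\mathrm{delay},t}$ for the probability assigned to the delaying token at step $t$, is $L_{\mathrm{delay}} = - \sum _{t \in \mathcal {D}} \log (1 - p_{\mathrm{delay},t})$.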
Proposed Method ::: Objective Function
For optimization, we combine three objective functions introduced so far, as follows:
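Consistent with the role of $\alpha $ described below, one natural combination is $L = L_{\mathrm{SCE}} + L_{\mathrm{CTC}} + \alpha L_{\mathrm{delay}}$.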
Here, $\alpha $ is a hyperparameter to adjust the amount of latency directly.
Experiments
We conducted simultaneous translation experiments from English to Japanese and discuss accuracy, latency, and issues in the translation results.
Experiments ::: Settings
All models were implemented as described in the previous sections using PyTorch. Both the encoders and the decoders were two-layered unidirectional LSTMs BIBREF3, and the decoder used input feeding BIBREF1. The number of dimensions in word embeddings and hidden vectors was set to 512, and the minibatch size was 64. We used Adam BIBREF4 for optimization with the default parameters. The learning rate was set to $10^{-1}$, and gradient clipping was set to 5. The dropout probability was set to $0.3$. The learning rate was decayed by a factor of $1/\sqrt{2}$ when the validation loss was larger than that in the previous epoch. Then, we chose the model with the smallest validation loss for evaluation.
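As an illustration of this schedule, a minimal PyTorch sketch of the decay-on-plateau rule is given below; the dummy parameter and the example loss values are ours, not from the experiments.

import math
import torch

def decay_lr_on_plateau(optimizer, val_loss, prev_val_loss, factor=1.0 / math.sqrt(2)):
    # Multiply the learning rate by `factor` when the validation loss got worse.
    if prev_val_loss is not None and val_loss > prev_val_loss:
        for group in optimizer.param_groups:
            group["lr"] *= factor

# Tiny usage example with a single dummy parameter.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=1e-1)
decay_lr_on_plateau(optimizer, val_loss=2.3, prev_val_loss=2.1)
print(optimizer.param_groups[0]["lr"])  # ~0.0707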
We used two different corpora for the experiments: small_parallel_enja and the Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF5. small_parallel_enja is a small-scale corpus consisting of sentences filtered to lengths of 4 to 16 words, and ASPEC is a mid-scale corpus of the scientific paper domain. Table TABREF21 shows their detailed statistics.
All datasets were tokenized into subword units BIBREF6, BIBREF7 using SentencePiece. The source and target language vocabularies were independent, and their sizes were set to 4000 tokens for small_parallel_enja and 8000 tokens for ASPEC, respectively. We filtered out sentences with more than 60 tokens or a length ratio of more than 9 from the training set.
We used “Wait-k” models and general NMT models as baselines. The general NMT models were attention-based encoder-decoders that translated from full-length source sentences (called Full Sentence). For evaluation metrics, we used BLEU BIBREF8 and RIBES BIBREF9 to measure translation accuracy, and token-level delay to measure latency. We used Kytea BIBREF10 as a tokenizer when evaluating Japanese translation accuracy.
Experiments ::: Experiments with Small-scale Corpus
We conducted small-scale experiments using small_parallel_enja. We compared different hyperparameters: $k = \lbrace 3, 5\rbrace $ and $\alpha = \lbrace 0, 0.01, 0.03, 0.05\rbrace $.
Table TABREF24 shows the latency and automatic evaluation scores on small_parallel_enja. The full-sentence scores are upper bounds for incremental methods. The proposed method reduced the average latency by more than 50% from the full-sentence baseline with some loss in BLEU and RIBES. The BLEU and RIBES results of the proposed method were worse than those of Wait-k. This would be due to some degradation in the smaller-latency parts that were determined adaptively by the proposed method, while Wait-k keeps a fixed latency.
Experiments ::: Experiments with Mid-scale Corpus
We investigated the performance on longer and more complex sentences in experiments using ASPEC. We compared different hyperparameters: $k = \lbrace 5, 7\rbrace $ and $\alpha = \lbrace 0.03, 0.05, 0.1\rbrace $.
Table TABREF26 shows the latency and automatic evaluation scores on ASPEC. We can see that the proposed method showed much larger latency than Wait-k. This is probably due to the many long and complex phrases used in the scientific articles in ASPEC. Wait-k has to translate such long phrases without sufficient input observations due to its strict fixed-latency strategy. On the other hand, the proposed method can wait for more input tokens adaptively by generating <wait>, at the cost of larger latency.
Experiments ::: Discussion
In the experimental results above, the proposed method determined the translation latency adaptively: a short delay for short and simple inputs as in small_parallel_enja, and a long delay for long and complex inputs as in ASPEC. Here we discuss our results in detail using some examples.
Table TABREF28 shows translation examples on small_parallel_enja. In the first example, the proposed method gives a correct translation result by adaptive waits. Wait-k generated the unrelated words 野球 (baseball) and 飲-み (drink) due to the poor input observations with its small fixed latency. The proposed method waited until the subword swim was observed and successfully generated the word 泳-ぐ (swim).
However, the proposed method sometimes generated consecutive <wait> symbols until the end of the input, as shown in the second example. This is probably due to our training strategy; the latency penalty would not be large enough to choose a small-latency translation at the cost of some increase in the SCE- and CTC-based losses. The translation data in the experiments come from standard translation rather than simultaneous interpretation, so the current data do not fully match the proposed approach. The use of data specialized for simultaneous translation, such as the more monotonic translations produced in simultaneous interpretation, would be important in practice.
Conclusion
In this paper, we proposed an adaptive latency control method for simultaneous neural machine translation in syntactically distant language pairs. We introduced a meta token <wait> to wait until the observation of the next input token. We proposed a CTC-based loss function to perform optimization using bilingual data without appropriate positions of <wait> , which is used along with the latency penalty and a standard word prediction loss. The experimental results suggest the proposed method determines when to translate or when to wait in an adaptive manner. Future work includes further analyses on translation accuracy in different latency conditions and time-based latency evaluation instead of the token-based one.
Acknowledgments
A part of this work is supported by JSPS Kakenhi JP17H06101. | model is composed of an encoder (§SECREF5) and a decoder with the attention mechanism (§SECREF7) that are both implemented using recurrent neural networks (RNNs) |
74fe054f5243c8593ddd2c0628f91657246b7dfa | 74fe054f5243c8593ddd2c0628f91657246b7dfa_0 | Q: Do they compare simultaneous translation performance to regular machine translation?
| No |
cc2b98b46497c71e955e844fb36e9ef6e2784640 | cc2b98b46497c71e955e844fb36e9ef6e2784640_0 | Q: Which metrics do they use to evaluate simultaneous translation?
| BLEU BIBREF8, RIBES BIBREF9, token-level delay |
6959e87cf2668a03854da3f042c87e6fdb2ade8a | 6959e87cf2668a03854da3f042c87e6fdb2ade8a_0 | Q: How big are FigureQA and DVQA datasets?
Text: Related Work
Datasets: Over the past few years several large scale datasets for Visual Question Answering have been released. These include datasets such as COCO-QA BIBREF3, DAQUAR BIBREF4, VQA BIBREF5, BIBREF6 which contain questions asked over natural images. On the other hand, datasets such as CLEVR BIBREF7 and NVLR BIBREF8 contain complex reasoning based questions on synthetic images having 2D and 3D geometric objects. There are some datasets BIBREF9, BIBREF10 which contain questions asked over diagrams found in text books but these datasets are smaller and contain multiple-choice questions. FigureSeer BIBREF11 is another dataset which contains images extracted from research papers but this is also a relatively small (60,000 images) dataset. Further, FigureSeer focuses on answering questions based on line plots as opposed to other types of plots such as bar charts, scatter plots, etc. as seen in FigureQA BIBREF0 and DVQA BIBREF1.
Models: The availability of the above mentioned datasets has facilitated the development of complex end-to-end neural network based models (BIBREF2, BIBREF12, BIBREF13, BIBREF14, BIBREF15). These end-to-end networks contain (a) encoders to compute a representation for the image and the question, (b) attention mechanisms to focus on important parts of the question and image, (c) interaction components to capture the interactions between the question and the image, and (d) a classification layer for selecting the answer from a fixed vocabulary. By design, these algorithms cannot be used in situations where the answer does not come from a fixed vocabulary but needs to be computed.
The PlotQA dataset
In this section, we describe the PlotQA dataset and the process to build it. Specifically, we discuss the four main stages, viz., (i) curating data such as year-wise rainfall statistics, country-wise mortality rates, etc., (ii) creating different types of plots with a variation in the number of elements, legend positions, fonts, etc., (iii) crowd-sourcing to generate questions, and (iv) extracting templates from the crowd-sourced questions and instantiating these templates using appropriate phrasing suggested by human annotators.
The PlotQA dataset ::: Data Collection and Curation
We considered online data sources such as World Bank Open Data , Open Government Data , Global Terrorism Database , etc. which contain statistics about various indicator variables such as fertility rate, rainfall, coal production, etc. across years, countries, districts, etc. We crawled data from these sources to extract different variables whose relations could then be plotted (for example, rainfall v/s years across countries, or movie v/s budget, or carbohydrates v/s food_item and so on). Some statistics about the crawled data are of interest. There are a total of 841 unique indicator variables (CO2 emission, Air Quality Index, Fertility Rate, Revenue generated by taxes, etc.) with 160 unique entities (cities, states, districts, countries, movies, food items, etc.). The data ranges from 1960 to 2016, though not all indicator variables have data items for all years. The data contains positive integers, floating point values, percentages, and values on a linear scale. These values range from 0 to $3.50\mathrm {e+}15$.
The PlotQA dataset ::: Plot Generation
We included 3 different types of plots in this dataset, viz., bar plots, line plots and scatter plots. Within bar plots, we have grouped them by orientation as either horizontal or vertical. Within the data sources we explored, we did not find enough data to create certain other types of plots such as Venn diagrams and pie charts which are used in specific settings. We also do not consider composite plots such as Pareto charts which have line graphs on top of bar graphs. Lastly, all the plots in our dataset contain only 2-axes. Figure FIGREF9 shows one sample of each plot type in PlotQA. Each of these plots can compactly represent 3-dimensional data. For instance, in Figure FIGREF9, the plot compares the indicator variable diesel prices across years for different countries. To enable the development of supervised modules for various sub-tasks we provide bounding box annotations for legend boxes, legend names, legend markers, axes titles, axes ticks, bars, lines, and plot title. By using different combination of indicator variables and entities (years, countries, etc.) we created a total of $224,377$ plots.
To ensure that there is enough variety in the plots, we randomly chose the following parameters: grid lines (present/absent), font size, notation used for tick labels (scientific-E notation or standard notation), line style (solid, dashed, dotted, dash-dot), marker styles for marking data points (asterisk, circle, diamond, square, triangle, inverted triangle), position of legends (bottom-left, bottom-centre, bottom-right, center-right, top-right), and colors for the lines and bars from a set of 73 different colors. The number of discrete elements on the $x$-axis varies from 2 to 12. Similarly, the number of entries in the legend box varies from 1 to 4. In other words, in the case of line plots, the number of lines varies from 1 to 4 and in the case of grouped bars the number of bars grouped on a single $x$-tick varies from 1 to 4. For example, for the plots in Figure FIGREF9, the number of discrete elements on the $x$-axis is 6 and the number of legend names (i.e., number of lines) is 3.
The PlotQA dataset ::: Sample Question Collection by Crowd-sourcing
Since the underlying data of the PlotQA dataset is much richer in comparison to FigureQA and DVQA, we found it necessary to ask a wider set of annotators to create questions over these plots. However, creating questions for all the plots in our dataset would have been prohibitively expensive. We sampled $1,400$ plots across different types and asked workers on Amazon Mechanical Turk to create questions for these plots. We showed each plot to 5 different workers resulting in a total of $7,000$ questions. We specifically instructed the workers to ask complex reasoning questions which involved reference to multiple plot elements in the plots. We also gave the workers a list of simple questions such as “Is this a vertical bar graph?”, “What is the title of the graph?”, “What is the value of coal production in 1993?” and asked them to not create such questions as we had already created such questions using hand designed templates. We paid the workers \$0.1 for each question.
The PlotQA dataset ::: Question Template Extraction & Instantiation
We manually analyzed the questions collected by crowdsourcing and divided them into a total of 74 templates (including the simple templates that we had manually designed as mentioned earlier). These templates were divided into 3 question categories. These question categories along with a few sample templates are shown below. We refer the reader to the Supplementary material for further details.
Structural Understanding: These are questions about the overall structure of the plot and do not require any quantitative reasoning. Examples: “How many different coloured bars are there?”, “How are the legend labels stacked?”.
Data Retrieval: These questions seek data item for a single element in the plot. Examples: “What is the number of tax payers in Myanmar in 2015?”, “How many bars are there on the 4th tick from the top?”.
Reasoning: These questions either require numeric reasoning over multiple plot elements or a comparative analysis of different elements of the plot, or a combination of both to answer the question. Examples: “In which country is the number of threatened bird species minimum?”, “What is the median banana production?”, “What is the difference between the number of deaths in Bulgaria and Cuba in the year 2005?”, “In how many years, is the rice production greater than the average rice production over all years?”.
We abstracted the questions into templates such as “In how many $<$plural form of X_label$>$, is the $<$Y_label$>$ of/in $<$legend_label$>$ greater than the average $<$Y_label$>$ of/in $<$legend_label$>$ taken over all $<$plural form of X_label$>$?”. We could then generate multiple questions for each template by replacing X_label, Y_label, legend_label, etc. by indicator variables, years, cities etc. from our curated data. However, this was a tedious task requiring a lot of manual intervention. For example, consider the indicator variable “Race of students” in Figure FIGREF1. If we substitute this indicator variable as it is in the above template, it would result in the question, “In how many cities, is the race of the students (%) of Asian greater than the average race of the students (%) of Asian taken over all cities?”, which sounds unnatural. To avoid this, we asked in-house annotators to carefully paraphrase these indicator variables and question templates. The paraphrased version of the above example was “In how many cities, is the percentage of Asian students greater than the average percentage of Asian students taken over all cities?”. Such paraphrasing for every question template and indicator variable required significant manual effort. Using this semi-automated process we were able to generate a total of $8,190,674$ questions. As shown in Table TABREF27, the answer could either be Yes/No, or come from a Fixed Vocabulary, or come from an Open Vocabulary. We believe that this approach of creating questions based on templates extracted from complex human-generated questions is a good middle ground between (i) the expensive and time-consuming process of creating questions with the help of humans and (ii) the inexpensive and fast process of creating questions from very simple templates (as in FigureQA and DVQA).
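As a rough illustration of this semi-automated instantiation, the sketch below fills one template with a paraphrased indicator variable; the paraphrase table, its key format, and the function name are hypothetical.

TEMPLATE = ("In how many {x_plural}, is the {y_paraphrase} greater than the "
            "average {y_paraphrase} taken over all {x_plural}?")

PARAPHRASE = {  # hand-written paraphrases for awkward indicator variables
    "Race of students (%) | Asian": "percentage of Asian students",
}

def instantiate(template, x_plural, indicator_key):
    return template.format(x_plural=x_plural, y_paraphrase=PARAPHRASE[indicator_key])

print(instantiate(TEMPLATE, "cities", "Race of students (%) | Asian"))
# In how many cities, is the percentage of Asian students greater than the
# average percentage of Asian students taken over all cities?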
Proposed Model
Existing models for VQA treat it as a multi-class classification problem, i.e., they assume that the answer needs to be picked from a fixed vocabulary. Such models work well for datasets such as DVQA where indeed all answers come from a fixed vocabulary (global or plot specific). However, in our dataset, for roughly 23% of Data Retrieval questions and 46% of Reasoning questions, the answers do not come from a fixed vocabulary but need to be computed by reasoning over one or more visual elements in the plots. To address such complex questions, we seek to leverage existing results on QA over tables BIBREF16. However, this requires the intermediate step of translating the plot image into a structured table (potentially similar to the one from which the plot was generated). To this end, we propose a pipelined method which separates the tasks of visual element detection and reasoning for QA. More specifically, our pipeline contains modules for (i) detecting visual elements in the plot such as bars, bounding boxes around axes label, etc. (ii) performing optical character recognition within these bounding boxes (iii) converting this data into a structured table and (iv) answering questions using this structured table.
Proposed Model ::: Visual Elements Detection (VED)
The data bearing elements of a plot are of 10 distinct classes: the title, the labels of the $x$ and $y$ axes, the tick labels or categories (e.g., countries) on the $x$ and $y$ axis, the data markers in the legend box, the legend names, and finally the bars and lines in the graph. Following existing literature (BIBREF17,BIBREF1), we refer to these elements as the visual elements of the graph. The first task is to extract all these visual elements by drawing bounding boxes around them and classifying them into the appropriate class. We can treat this as (i) an object detection + classification task or (ii) an instance segmentation task. If we take the former view then we can use existing object detection models such as RCNN, Fast-RCNN BIBREF18, YOLO BIBREF19, SSD BIBREF20, etc. and if we take the latter view we can use instance segmentation models such as Mask-RCNN. We tried both these approaches and found that instance segmentation based methods perform better for this task and hence we use Mask-RCNN as our VED module. Figure FIGREF21 shows the expected output of this stage with all the visual elements detected.
Proposed Model ::: Object Character Recognition (OCR)
Some of the visual elements such as title, legends, tick labels, etc. contain numeric and textual data. For extracting this data from within these bounding boxes, we use a state-of-the-art OCR model BIBREF21. More specifically, we crop the detected visual element to its bounding box, convert the cropped image into grayscale, resize and deskew it, and then pass it to an OCR module. Existing OCR modules perform well for machine-written English text, and indeed we found that a pre-trained OCR module works well on our dataset.
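A minimal sketch of this preprocessing, assuming Pillow for image handling and pytesseract as a stand-in for the OCR model used here (deskewing is omitted from the sketch):

import pytesseract
from PIL import Image

def ocr_crop(image_path, box, scale=3):
    # `box` is a (left, upper, right, lower) bounding box produced by the VED stage.
    crop = Image.open(image_path).convert("L").crop(box)
    crop = crop.resize((crop.width * scale, crop.height * scale))
    return pytesseract.image_to_string(crop).strip()

# Hypothetical usage on a detected tick-label box:
# print(ocr_crop("plot.png", (12, 340, 58, 362)))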
Proposed Model ::: Semi-Structured Information Extraction (SIE)
The next stage of extracting the data into a semi-structured table is best explained with an example shown in Figure FIGREF21. The desired output of SIE is shown in the table where the rows correspond to the ticks on the $x$-axis (1996, 1997, 1998, 1999), the columns correspond to the different elements listed in the legend (Brazil, Iceland, Kazakhstan, Thailand) and the $i$,$j$-th cell contains the value corresponding to the $i$-th $x$-tick and the $j$-th legend entry. The values of the $x$-tick labels and the legend names are available from the OCR module. The mapping of legend name to legend marker or color is done by associating a legend name to the marker or color whose bounding box is closest to the bounding box of the legend name. Similarly, we associate each tick label to the tick marker whose bounding box is closest to the bounding box of the tick label. For example, we associate the legend name Brazil to the color “Dark Cyan” and the tick label 1996 to the corresponding tick mark on the $x$-axis. With this we have the 4 row headers and 4 column headers, respectively. To fill in the 16 values in the table, there are again two smaller steps. First we associate each of the 16 bounding boxes of the 16 bars to their corresponding $x$-ticks and legend names. A bar is associated with an $x$-tick label whose bounding box is closest to the bounding box of the bar. To associate a bar to a legend name, we find the dominant color in the bounding box of the bar and match it with a legend name corresponding to that color. Second, we need to find the value represented by each bar. We extract the height of the bar using bounding box information from the VED module and then search for the $y$-tick labels immediately above and below that height. We then interpolate the value of the bar based on the values of these bounding ticks. With this we have the 16 values in the cells and thus have extracted all the information from the plot into a semi-structured table.
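As an illustration of the final interpolation step, the sketch below assumes we already have the pixel position of the bar's top edge and the pixel positions and numeric values of the two bounding $y$-ticks; the function name and conventions are ours.

def interpolate_bar_value(bar_top_px, tick_below, tick_above):
    # Each tick is a (pixel_y, value) pair; pixel coordinates grow downwards.
    (y_lo_px, v_lo), (y_hi_px, v_hi) = tick_below, tick_above
    frac = (y_lo_px - bar_top_px) / float(y_lo_px - y_hi_px)
    return v_lo + frac * (v_hi - v_lo)

# Example: ticks at value 20 (pixel row 400) and 30 (pixel row 300), bar top at row 330.
print(interpolate_bar_value(330, (400, 20), (300, 30)))  # 27.0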
Proposed Model ::: Table Question Answering (QA)
The final stage of the pipeline is to answer questions on the semi-structured table. As this is similar to answering questions from the WikiTableQuestions dataset BIBREF22, we adopt the same methodology as proposed in BIBREF22. In this method, the table is converted to a knowledge graph and the question is converted to a set of candidate logical forms by applying compositional semantic parsing. These logical forms are then ranked using a log-linear model and the highest ranking logical form is applied to the knowledge graph to get the answer. Note that with this approach the output is computed by a logical form that operates on the numerical data. This avoids the limitation of using a small answer vocabulary for multi-class classification as is done in existing work on VQA. There are some recent neural approaches for answering questions over semi-structured tables such as BIBREF23, BIBREF24 which take an ensemble of many models and outperform the relatively simpler model of BIBREF22 only by a small margin (1-2%). In the absence of an ensemble, these neural methods do not perform better than the method proposed in BIBREF22. To the best of our knowledge, there is one neural method BIBREF25 which performs better than BIBREF22 but the code for this model is not available which makes it hard to reproduce their results. Hence we chose the model of BIBREF22 for this stage which is relatively simpler and readily available.
Experiments
In this section, we detail the data splits, baseline models, hyperparameter tuning, and evaluation metrics.
Experiments ::: Train-Valid-Test Splits
As mentioned earlier, by using different combinations of 841 indicator variables and 160 entities (years, countries, etc), we created a total of $224,377$ plots. Depending on the context and type of the plot, we instantiated the 74 templates to create meaningful (question,answer) pairs for each of the plots. The number of questions per plot varies from 17 to 44. We created train (70%), valid (15%) and test (15%) splits from this data. These statistics are summarized in Table TABREF27. The dataset, crowd-sourced questions and the model will be made available on the acceptance of this paper.
Experiments ::: Models Compared
We compare the performance of the following models:
- IMG-only: This is a simple baseline where we just pass the image through a VGG19 and use the embedding of the image to predict the answer from a fixed vocabulary.
- QUES-only: This is a simple baseline where we just pass the question through a LSTM and use the embedding of the question to predict the answer from a fixed vocabulary.
- SAN BIBREF2: This is a state-of-the-art VQA model which is an encoder-decoder model with a multi-layer stacked attention BIBREF26 mechanism. It obtains a representation for the image using a deep CNN and a representation for the query using an LSTM. It then uses the query representation to locate relevant regions in the image and uses this to pick an answer from a fixed vocabulary.
- SANDY BIBREF1: This is the best-performing model on the DVQA dataset and is a variant of SAN. Unfortunately, the code for this model is not available and the description in the paper was not detailed enough for us to reimplement it. Hence, we report the numbers for this model only on DVQA (from the original paper).
- VOES: This is our model as described in section SECREF3 which is specifically designed for questions which do not have answers from a fixed vocabulary.
- VOES-Oracle: This is our model where the first three stages of VOES are replaced by an Oracle, i.e., the QA model answers questions on a table that has been generated using the ground truth annotations of the plot. With this we can evaluate the performance of the WikiTableQA model when it is not affected by the VED model's errors.
- SAN-VOES: Given the complementary strengths of SAN-VQA and VOES, we train a hybrid model with a binary classifier which given a question decides whether to use the SAN or the VOES model. The data for training this binary classifier is generated by comparing the predictions of a trained SAN model and a trained VOES model on the training dataset. For a given question, the label is set to 1 (pick SAN) if the performance of SAN was better than that of VOES. We ignore questions where there is a tie. The classifier is a simple LSTM based model which computes a representation for the question using an LSTM and uses this representation to predict 1/0. At test time, we first pass the question through this model and depending on the output of this model use SAN or VOES.
Experiments ::: Training Details
SAN: We used an existing implementation of SAN for establishing the initial baseline results. Image features are extracted from the last pooling layer of the VGG19 network. Question features are the last hidden state of the LSTM. Both the LSTM hidden state and the 512-d image feature vector at each location are transformed into a 1024-d vector by a fully connected layer, added, and passed through a non-linearity (tanh). The model was trained using the Adam BIBREF27 optimizer with an initial learning rate of $0.0003$ and a batch size of 128 for 25,000 iterations.
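A hedged PyTorch sketch of the fusion step described above: image-region features and the question LSTM state are each projected to 1024 dimensions, summed, and passed through tanh. The question feature dimension and layer names are assumptions, and the attention and answer-classification layers of SAN are omitted; this is not the reference implementation.

```python
import torch
import torch.nn as nn


class QuestionImageFusion(nn.Module):
    """Project image-region and question features to a common space and combine them."""

    def __init__(self, img_dim=512, ques_dim=512, common_dim=1024):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, common_dim)
        self.ques_proj = nn.Linear(ques_dim, common_dim)

    def forward(self, img_feats, ques_feat):
        # img_feats: (batch, num_regions, 512) from the last pooling layer of VGG19
        # ques_feat: (batch, 512) last hidden state of the question LSTM (dim assumed)
        img_h = self.img_proj(img_feats)                  # (batch, regions, 1024)
        ques_h = self.ques_proj(ques_feat).unsqueeze(1)   # (batch, 1, 1024)
        return torch.tanh(img_h + ques_h)                 # broadcast add, then tanh


fusion = QuestionImageFusion()
h = fusion(torch.randn(2, 196, 512), torch.randn(2, 512))
print(h.shape)  # torch.Size([2, 196, 1024])
```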
Proposed Pipeline: Of the four stages of the pipeline described in Section 4.2, only two require training, viz., Visual Elements Detection (VED) and Table Question Answering (QA). As mentioned earlier, for VED we train an instance segmentation model (Mask-RCNN BIBREF28) using the bounding box annotations available in our dataset. We trained each model with a batch size of 32 for $200,000$ steps, beyond which no further training benefit was seen. We used RMSProp as the optimizer with an initial learning rate of $0.004$. For Table QA, we trained the model proposed in BIBREF22 using questions from our dataset and the corresponding ground truth tables. Since this model is computationally expensive with a high training time, we could train it using only $400,000$ questions from our training set.
SAN-VOES: The binary question classifier in this hybrid model contains a 50-dimensional word embedding layer followed by an LSTM with 128 hidden units. The output of the LSTM is projected to 256 dimensions and this is then fed to the output layer. The model is trained for 10 epochs using RMSProp with an initial learning rate of 0.001. Accuracy on the validation set is $87.3\%$.
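A minimal PyTorch sketch of such a binary question classifier, matching the reported hyper-parameters (50-d embeddings, a 128-unit LSTM, a 256-d projection); the vocabulary size and the class and attribute names are assumptions.

```python
import torch
import torch.nn as nn


class QuestionRouter(nn.Module):
    """Predicts whether a question should be routed to SAN (1) or VOES (0)."""

    def __init__(self, vocab_size=10000, emb_dim=50, hidden_dim=128, proj_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, proj_dim)
        self.out = nn.Linear(proj_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded question
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)             # h_n: (1, batch, hidden_dim)
        proj = torch.relu(self.proj(h_n[-1]))    # (batch, proj_dim)
        return torch.sigmoid(self.out(proj))     # probability of picking SAN


model = QuestionRouter()
probs = model(torch.randint(0, 10000, (4, 12)))
print(probs.shape)  # torch.Size([4, 1])
```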
Experiments ::: Evaluation Metric
We used accuracy as the evaluation metric. Specifically, for textual answers (such as India, CO2, etc.) the model's output was considered to be correct only if the predicted answer exactly matches the true answer. However, for numeric answers which contain floating point values such an exact match is a very strict evaluation metric (for example, if the predicted answer is 10.5 and the true answer is 10 then in most cases it would be acceptable). Hence, we relax the accuracy measure to consider the predicted answer to be correct as long as it is within 5% of the correct answer.
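The relaxed accuracy check can be written as a small helper; the handling of answer types below is an assumption about how predictions are represented, not the paper's evaluation script.

```python
def is_correct(predicted, target, tolerance=0.05):
    """Exact match for textual answers, +/-5% relative tolerance for numeric ones."""
    try:
        pred_val, true_val = float(predicted), float(target)
    except (TypeError, ValueError):
        # Textual answer: require an exact (case-insensitive) string match.
        return str(predicted).strip().lower() == str(target).strip().lower()
    if true_val == 0:
        return pred_val == 0
    return abs(pred_val - true_val) <= tolerance * abs(true_val)


print(is_correct(10.5, 10))          # True  (within 5% of the true answer)
print(is_correct(11, 10))            # False (10% off)
print(is_correct("India", "india"))  # True  (textual exact match)
```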
Observations and Results
1. Evaluating models on PlotQA dataset (Table TABREF35): The baselines IMG-only and QUES-only performed poorly with an accuracy of $14.84\%$ and $15.35\%$, respectively. We then evaluate SAN, VOES, VOES-Oracle, and SAN-VOES on each of the 9 question-answer types of the PlotQA dataset. SAN performs very well on Yes/No questions and moderately well on Fixed vocab. questions with a good baseline aggregate accuracy of 46.54%. SAN performs poorly on Open vocab. questions, failing to answer almost all the 319,000 questions in this category. On the other hand, VOES fails to answer correctly any of the Yes/No questions, performs moderately well on Fixed vocab. questions, and answers correctly some of the hard Open vocab. questions. SAN-VOES combines the complementary strengths of SAN and VOES with the highest accuracy of 53.96%. In particular, the performance improves significantly for all Fixed vocab. questions, while retaining the high accuracy of SAN on Yes/No questions and VOES' performance on Open vocab. questions. There is a significant difference in the performance of VOES and VOES-Oracle across multiple question-answer types. This implies that the visual element detection in the VOES pipeline can be further improved.
2. Analysis of the VOES pipeline: We analyze the performance of the visual element detection (VED) and OCR modules.
- Table TABREF36 shows that the VED module performs reasonably well at an Intersection Over Union (IOU) of 0.5 (a standard IOU computation is sketched after this list for reference). For higher IOUs of 0.8 and 0.9, the accuracy falls drastically. For instance, at an IOU of 0.9, dotlines are detected with an accuracy of under 5%. Clearly, such inaccuracies would lead to incorrect table generation and subsequent QA. This brings out an interesting difference between this task and other instance segmentation tasks where the margin of error is higher (where an IOU of 0.5 is accepted). A small error in visual element detection, as indicated by mAP scores of 80%, is considered negligible for VQA tasks; however, for PlotQA, small errors can cause significantly misaligned table generation and subsequent QA. We illustrate this with an example given in Figure 4. The predicted red box, having an IOU of 0.58, estimates the bar size as 760 as opposed to the ground truth of 680, significantly impacting downstream QA accuracy.
Retraining the VED model with a higher IOU of 0.75 only resulted in a small increase in accuracy (last row). Thus, inverting the plot generation function in going from the plot image to the structured table is a difficult CV task.
- In Table TABREF37 we evaluate the performance of the OCR module in standalone/oracle mode and pipeline mode. In the oracle mode, we feed ground truth boxes to the OCR model, whereas in the pipeline mode we perform OCR on the output of the VED module. We observe only a small drop in performance, which indicates that the OCR module is robust to the reduction in the VED module's accuracy at higher IOU, as it does not depend on the class label or the exact position of bounding boxes.
- In summary, a highly accurate VED for structured images is an open challenge to improve reasoning over plots.
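For reference, the IOU metric used in the analysis above is the standard intersection-over-union between axis-aligned boxes; the (x_min, y_min, x_max, y_max) box format and the example values are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# A predicted box that only partially overlaps a ground-truth bar:
print(round(iou((50, 100, 80, 300), (50, 140, 80, 300)), 2))  # 0.8
```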
3. Evaluating new models on the existing DVQA dataset (Table TABREF34): The proposed model VOES performs better than the existing models (SAN and SANDY-OCR) on DVQA. The higher performance of VOES in comparison to SAN (in contrast to the PlotQA results) suggests that the extraction of the structured table is more accurate on the DVQA dataset. This is because of the limited variability in the axis and tick labels and shorter length (one word only) of labels. The hybrid model, SAN-VOES, improves on the individual models and establishes a new SOTA result on DVQA.
Conclusion
We introduce the PlotQA dataset to reduce the gap between existing synthetic plot datasets and real-world plots and question templates. Analysis of an existing model for VQA for plots, SAN, on PlotQA reveals that it performs poorly for Open Vocabulary questions. We proposed the VOES model as a pipelined approach that combines visual element detection and OCR with QA over tables, specifically for the Open Vocabulary questions. A hybrid model, SAN-VOES, that combines SAN and VOES for different question types, generates state-of-the-art results on both the DVQA and PlotQA datasets. Detailed analysis of the VOES pipeline reveals the need for more accurate visual element detection to improve reasoning over plots. | Unanswerable
a7f07ae48eed084c3144214228f4ecb72bc0a0e3 | a7f07ae48eed084c3144214228f4ecb72bc0a0e3_0 | Q: What models other than SAN-VOES are trained on new PlotQA dataset?
Text: Related Work
Datasets: Over the past few years several large-scale datasets for Visual Question Answering have been released. These include datasets such as COCO-QA BIBREF3, DAQUAR BIBREF4, VQA BIBREF5, BIBREF6 which contain questions asked over natural images. On the other hand, datasets such as CLEVR BIBREF7 and NLVR BIBREF8 contain complex reasoning-based questions on synthetic images having 2D and 3D geometric objects. There are some datasets BIBREF9, BIBREF10 which contain questions asked over diagrams found in textbooks, but these datasets are smaller and contain multiple-choice questions. FigureSeer BIBREF11 is another dataset which contains images extracted from research papers, but this is also a relatively small (60,000 images) dataset. Further, FigureSeer focuses on answering questions based on line plots as opposed to other types of plots such as bar charts, scatter plots, etc. as seen in FigureQA BIBREF0 and DVQA BIBREF1.
Models: The availability of the above mentioned datasets has facilitated the development of complex end-to-end neural network based models (BIBREF2, BIBREF12, BIBREF13, BIBREF14, BIBREF15). These end-to-end networks contain (a) encoders to compute a representation for the image and the question, (b) attention mechanisms to focus on important parts of the question and image, (c) interaction components to capture the interactions between the question and the image, and (d) a classification layer for selecting the answer from a fixed vocabulary. By design, these algorithms cannot be used in situations where the answer does not come from a fixed vocabulary but needs to be computed.
The PlotQA dataset
In this section, we describe the PlotQA dataset and the process to build it. Specifically, we discuss the four main stages, viz., (i) curating data such as year-wise rainfall statistics, country-wise mortality rates, etc., (ii) creating different types of plots with a variation in the number of elements, legend positions, fonts, etc., (iii) crowd-sourcing to generate questions, and (iv) extracting templates from the crowd-sourced questions and instantiating these templates using appropriate phrasing suggested by human annotators.
The PlotQA dataset ::: Data Collection and Curation
We considered online data sources such as World Bank Open Data , Open Government Data , Global Terrorism Database , etc. which contain statistics about various indicator variables such as fertility rate, rainfall, coal production, etc. across years, countries, districts, etc. We crawled data from these sources to extract different variables whose relations could then be plotted (for example, rainfall v/s years across countries, or movie v/s budget, or carbohydrates v/s food_item and so on). Some statistics about the crawled data are of interest. There are a total of 841 unique indicator variables (CO2 emission, Air Quality Index, Fertility Rate, Revenue generated by taxes, etc.) with 160 unique entities (cities, states, districts, countries, movies, food items, etc.). The data ranges from 1960 to 2016, though not all indicator variables have data items for all years. The data contains positive integers, floating point values, percentages, and values on a linear scale. These values range from 0 to $3.50\mathrm {e+}15$.
The PlotQA dataset ::: Plot Generation
We included 3 different types of plots in this dataset, viz., bar plots, line plots and scatter plots. Within bar plots, we have grouped them by orientation as either horizontal or vertical. Within the data sources we explored, we did not find enough data to create certain other types of plots such as Venn diagrams and pie charts which are used in specific settings. We also do not consider composite plots such as Pareto charts which have line graphs on top of bar graphs. Lastly, all the plots in our dataset contain only 2-axes. Figure FIGREF9 shows one sample of each plot type in PlotQA. Each of these plots can compactly represent 3-dimensional data. For instance, in Figure FIGREF9, the plot compares the indicator variable diesel prices across years for different countries. To enable the development of supervised modules for various sub-tasks we provide bounding box annotations for legend boxes, legend names, legend markers, axes titles, axes ticks, bars, lines, and plot title. By using different combination of indicator variables and entities (years, countries, etc.) we created a total of $224,377$ plots.
To ensure that there is enough variety in the plots, we randomly chose the following parameters: grid lines (present/absent), font size, notation used for tick labels (scientific-E notation or standard notation), line style (solid, dashed, dotted, dash-dot), marker styles for marking data points (asterisk, circle, diamond, square, triangle, inverted triangle), position of legends (bottom-left, bottom-centre, bottom-right, center-right, top-right), and colors for the lines and bars from a set of 73 different colors. The number of discrete elements on the $x$-axis varies from 2 to 12. Similarly, the number of entries in the legend box varies from 1 to 4. In other words, in the case of line plots, the number of lines varies from 1 to 4 and in the case of grouped bars the number of bars grouped on a single $x$-tick varies from 1 to 4. For example, for the plots in Figure FIGREF9, the number of discrete elements on the $x$-axis is 6 and the number of legend names (i.e., number of lines) is 3.
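A hedged matplotlib sketch of how such stylistic variation could be sampled while rendering a plot; the parameter lists are abbreviated and the actual dataset-generation code may differ.

```python
import random
import matplotlib.pyplot as plt


def render_line_plot(x_labels, series, out_path="plot.png"):
    """Render one line plot with randomly sampled style parameters."""
    line_styles = ["-", "--", ":", "-."]
    markers = ["*", "o", "D", "s", "^", "v"]
    legend_pos = ["lower left", "lower center", "lower right",
                  "center right", "upper right"]

    fig, ax = plt.subplots()
    for name, values in series.items():
        ax.plot(x_labels, values,
                linestyle=random.choice(line_styles),
                marker=random.choice(markers),
                label=name)
    if random.random() < 0.5:
        ax.grid(True)                                     # grid lines present/absent
    ax.tick_params(labelsize=random.choice([8, 10, 12]))  # font size variation
    ax.legend(loc=random.choice(legend_pos))              # legend position variation
    fig.savefig(out_path)
    plt.close(fig)


render_line_plot([2010, 2011, 2012],
                 {"Brazil": [1.2, 1.5, 1.4], "Iceland": [0.8, 0.9, 1.1]})
```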
The PlotQA dataset ::: Sample Question Collection by Crowd-sourcing
Since the underlying data of the PlotQA dataset is much richer in comparison to FigureQA and DVQA, we found it necessary to ask a wider set of annotators to create questions over these plots. However, creating questions for all the plots in our dataset would have been prohibitively expensive. We sampled $1,400$ plots across different types and asked workers on Amazon Mechanical Turk to create questions for these plots. We showed each plot to 5 different workers resulting in a total of $7,000$ questions. We specifically instructed the workers to ask complex reasoning questions which involved reference to multiple plot elements in the plots. We also gave the workers a list of simple questions such as “Is this a vertical bar graph?”, “What is the title of the graph?”, “What is the value of coal production in 1993?” and asked them to not create such questions as we had already created such questions using hand designed templates. We paid the workers $0.1\$$ for each question.
The PlotQA dataset ::: Question Template Extraction & Instantiation
We manually analyzed the questions collected by crowdsourcing and divided them into a total of 74 templates (including the simple templates that we had manually designed as mentioned earlier). These templates were divided into 3 question categories. These question categories along with a few sample templates are shown below. We refer the reader to the Supplementary material for further details.
Structural Understanding: These are questions about the overall structure of the plot and do not require any quantitative reasoning. Examples: “How many different coloured bars are there?”, “How are the legend labels stacked?”.
Data Retrieval: These questions seek data item for a single element in the plot. Examples: “What is the number of tax payers in Myanmar in 2015?”, “How many bars are there on the 4th tick from the top?”.
Reasoning: These questions either require numeric reasoning over multiple plot elements or a comparative analysis of different elements of the plot, or a combination of both to answer the question. Examples: “In which country is the number of threatened bird species minimum?”, “What is the median banana production?”, “What is the difference between the number of deaths in Bulgaria and Cuba in the year 2005?”, “In how many years, is the rice production greater than the average rice production over all years?”.
We abstracted the questions into templates such as “In how many $<$plural form of X_label$>$, is the $<$Y_label$>$ of/in $<$legend_label$>$ greater than the average $<$Y_label$>$ of/in $<$legend_label$>$ taken over all $<$plural form of X_label$>$?”. We could then generate multiple questions for each template by replacing X_label, Y_label, legend_label, etc. by indicator variables, years, cities etc. from our curated data. However, this was a tedious task requiring a lot of manual intervention. For example, consider the indicator variable “Race of students” in Figure FIGREF1. If we substitute this indicator variable as it is in the above template, it would result in a question, “In how many cities, is the race of the students (%) of Asian greater than the average race of the students (%) of Asian taken over all cities?”, which sounds unnatural. To avoid this, we asked in-house annotators to carefully paraphrase these indicator variables and question templates. The paraphrased version of the above example was “In how many cities, is the percentage of Asian students greater than the average percentage of Asian students taken over all cities?”. Such paraphrasing for every question template and indicator variable required significant manual effort. Using this semi-automated process we were able to generate a total of $8,190,674$ questions. As shown in Table TABREF27, the answer could either be Yes/No, or from Fixed Vocabulary, or Open Vocabulary. We believe that this approach of creating questions based on templates extracted from complex human-generated questions is a good middle ground between (i) the expensive and time-consuming process of creating questions with the help of humans and (ii) the inexpensive and fast process of creating questions from very simple templates (as in FigureQA and DVQA).
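A toy sketch of the template-instantiation step using Python string templates; the simplified template and placeholder names are illustrative, and the real templates also carry legend slots and the hand-written paraphrases mentioned above.

```python
from string import Template

# One simplified reasoning template; real templates also have legend_label slots
# and rely on hand-written paraphrases for awkward indicator names.
template = Template(
    "In how many ${x_plural}, is the ${y_label} greater than the "
    "average ${y_label} over all ${x_plural}?"
)

question = template.substitute(x_plural="years", y_label="rice production")
print(question)
# In how many years, is the rice production greater than the
# average rice production over all years?
```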
Proposed Model
Existing models for VQA treat it as a multi-class classification problem, i.e., they assume that the answer needs to be picked from a fixed vocabulary. Such models work well for datasets such as DVQA where indeed all answers come from a fixed vocabulary (global or plot specific). However, in our dataset, for roughly 23% of Data Retrieval questions and 46% of Reasoning questions, the answers do not come from a fixed vocabulary but need to be computed by reasoning over one or more visual elements in the plots. To address such complex questions, we seek to leverage existing results on QA over tables BIBREF16. However, this requires the intermediate step of translating the plot image into a structured table (potentially similar to the one from which the plot was generated). To this end, we propose a pipelined method which separates the tasks of visual element detection and reasoning for QA. More specifically, our pipeline contains modules for (i) detecting visual elements in the plot such as bars, bounding boxes around axes label, etc. (ii) performing optical character recognition within these bounding boxes (iii) converting this data into a structured table and (iv) answering questions using this structured table.
Proposed Model ::: Visual Elements Detection (VED)
The data bearing elements of a plot are of 10 distinct classes: the title, the labels of the $x$ and $y$ axes, the tick labels or categories (e.g., countries) on the $x$ and $y$ axis, the data markers in the legend box, the legend names, and finally the bars and lines in the graph. Following existing literature (BIBREF17,BIBREF1), we refer to these elements as the visual elements of the graph. The first task is to extract all these visual elements by drawing bounding boxes around them and classifying them into the appropriate class. We can treat this as (i) an object detection + classification task or (ii) an instance segmentation task. If we take the former view then we can use existing object detection models such as RCNN, Fast-RCNN BIBREF18, YOLO BIBREF19, SSD BIBREF20, etc. and if we take the latter view we can use instance segmentation models such as Mask-RCNN. We tried both these approaches and found that instance segmentation based methods perform better for this task and hence we use Mask-RCNN as our VED module. Figure FIGREF21 shows the expected output of this stage with all the visual elements detected.
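A hedged torchvision sketch of configuring Mask-RCNN for this detection task; the exact list of 10 class names is only a plausible reading of the description above, and the dummy training step merely illustrates the expected input/target format, not the paper's training setup.

```python
import torch
import torchvision

# A guessed label set for the 10 visual-element classes (+1 for background,
# as torchvision's detection models require).
CLASSES = ["title", "x_axis_label", "y_axis_label", "x_tick_label", "y_tick_label",
           "legend_marker", "legend_name", "bar", "line", "dot_line"]

model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=len(CLASSES) + 1)
model.train()

# One dummy training step showing the expected inputs and targets.
images = [torch.rand(3, 448, 448)]
targets = [{
    "boxes": torch.tensor([[50.0, 120.0, 80.0, 300.0]]),  # (x_min, y_min, x_max, y_max)
    "labels": torch.tensor([CLASSES.index("bar") + 1]),    # label 0 is background
    "masks": torch.zeros(1, 448, 448, dtype=torch.uint8),  # per-instance binary mask
}]
losses = model(images, targets)  # dict of classification/box/mask/RPN losses
print(sorted(losses.keys()))
```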
Proposed Model ::: Object Character Recognition (OCR)
Some of the visual elements such as title, legends, tick labels, etc. contain numeric and textual data. For extracting this data from within these bounding boxes, we use a state-of-the-art OCR model BIBREF21. More specifically, we crop the detected visual element to its bounding box, convert the cropped image into grayscale, resize and deskew it, and then pass it to an OCR module. Existing OCR modules perform well for machine-written English text, and indeed we found that a pre-trained OCR module works well on our dataset.
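A hedged sketch of the crop-and-preprocess step, using Pillow and pytesseract as a stand-in OCR engine (the paper uses a different pretrained OCR model BIBREF21); the deskewing step is omitted for brevity.

```python
from PIL import Image
import pytesseract


def read_text_element(plot_path, box, scale=2):
    """Crop a detected text element, convert to grayscale, upscale, and run OCR.

    box is (x_min, y_min, x_max, y_max) in pixels, as produced by the VED module.
    """
    image = Image.open(plot_path)
    crop = image.crop(box).convert("L")          # grayscale
    w, h = crop.size
    crop = crop.resize((w * scale, h * scale))   # upscale small tick/legend labels
    return pytesseract.image_to_string(crop).strip()


# e.g. read_text_element("plot.png", (10, 95, 40, 105)) might return "800"
```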
Proposed Model ::: Semi-Structured Information Extraction (SIE)
The next stage of extracting the data into a semi-structured table is best explained with an example shown in Figure FIGREF21. The desired output of SIE is shown in the table where the rows correspond to the ticks on the $x$-axis (1996, 1997, 1998, 1999), the columns correspond to the different elements listed in the legend (Brazil, Iceland, Kazakhstan, Thailand) and the $(i,j)$-th cell contains the value corresponding to the $i$-th tick and the $j$-th legend entry. The values of the $x$-tick labels and the legend names are available from the OCR module. The mapping of legend name to legend marker or color is done by associating a legend name to the marker or color whose bounding box is closest to the bounding box of the legend name. Similarly, we associate each tick label to the tick marker whose bounding box is closest to the bounding box of the tick label. For example, we associate the legend name Brazil to the color “Dark Cyan” and the tick label 1996 to the corresponding tick mark on the $x$-axis. With this we have the 4 row headers and 4 column headers, respectively. To fill in the 16 values in the table, there are again two smaller steps. First, we associate each of the 16 bounding boxes of the 16 bars to their corresponding $x$-ticks and legend names. A bar is associated with an $x$-tick label whose bounding box is closest to the bounding box of the bar. To associate a bar to a legend name, we find the dominant color in the bounding box of the bar and match it with a legend name corresponding to that color. Second, we need to find the value represented by each bar. We extract the height of the bar using bounding box information from the VED module and then search for the $y$-tick labels immediately above and below that height. We then interpolate the value of the bar based on the values of these bounding ticks. With this we have the 16 values in the cells and thus have extracted all the information from the plot into a semi-structured table.
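The colour-based association of bars to legend names can be sketched as follows; the dominant-colour heuristic and the RGB value used for “Dark Cyan” are illustrative assumptions, not the paper's code.

```python
from collections import Counter
from PIL import Image


def dominant_color(image, box):
    """Most frequent RGB value inside a bounding box (x_min, y_min, x_max, y_max)."""
    pixels = list(image.crop(box).convert("RGB").getdata())
    return Counter(pixels).most_common(1)[0][0]


def match_bar_to_legend(image, bar_box, legend_colors):
    """Assign a bar to the legend entry whose colour is closest to the bar's dominant colour."""
    bar_rgb = dominant_color(image, bar_box)

    def dist(rgb):
        return sum((a - b) ** 2 for a, b in zip(bar_rgb, rgb))

    return min(legend_colors, key=lambda name: dist(legend_colors[name]))


# e.g. match_bar_to_legend(Image.open("plot.png"), (50, 120, 80, 300),
#                          {"Brazil": (0, 139, 139), "Iceland": (255, 165, 0)})
```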
Proposed Model ::: Table Question Answering (QA)
The final stage of the pipeline is to answer questions on the semi-structured table. As this is similar to answering questions from the WikiTableQuestions dataset BIBREF22, we adopt the same methodology as proposed in BIBREF22. In this method, the table is converted to a knowledge graph and the question is converted to a set of candidate logical forms by applying compositional semantic parsing. These logical forms are then ranked using a log-linear model and the highest ranking logical form is applied to the knowledge graph to get the answer. Note that with this approach the output is computed by a logical form that operates on the numerical data. This avoids the limitation of using a small answer vocabulary for multi-class classification as is done in existing work on VQA. There are some recent neural approaches for answering questions over semi-structured tables, such as BIBREF23, BIBREF24, which use an ensemble of many models and outperform the relatively simpler model of BIBREF22 only by a small margin (1-2%). In the absence of an ensemble, these neural methods do not perform better than the method proposed in BIBREF22. To the best of our knowledge, there is one neural method BIBREF25 which performs better than BIBREF22, but the code for this model is not available, which makes it hard to reproduce their results. Hence we chose the model of BIBREF22 for this stage, which is relatively simpler and readily available.
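As a toy illustration of the execution step only: once a logical form has been selected, it is applied to the (semi-structured) table to compute the answer. The compositional semantic parsing and log-linear ranking of BIBREF22 are far more involved and are not reproduced here; the logical form below is written directly as a Python expression rather than in lambda-DCS.

```python
# Semi-structured table as rows of {column: value}, as produced by the SIE stage.
table = [
    {"year": 1996, "Brazil": 4.5, "Iceland": 6.1},
    {"year": 1997, "Brazil": 4.8, "Iceland": 6.0},
    {"year": 1998, "Brazil": 5.2, "Iceland": 5.7},
]

# A chosen logical form for "In which year is the value for Brazil maximum?",
# executed directly over the table.
answer = max(table, key=lambda row: row["Brazil"])["year"]
print(answer)  # 1998
```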
Experiments
In this section we detail the data splits, baseline models, hyperparameter tuning, and evaluation metrics.
Experiments ::: Train-Valid-Test Splits
As mentioned earlier, by using different combinations of 841 indicator variables and 160 entities (years, countries, etc.), we created a total of $224,377$ plots. Depending on the context and type of the plot, we instantiated the 74 templates to create meaningful (question, answer) pairs for each of the plots. The number of questions per plot varies from 17 to 44. We created train (70%), valid (15%) and test (15%) splits from this data. These statistics are summarized in Table TABREF27. The dataset, crowd-sourced questions and the model will be made available upon acceptance of this paper.
Experiments ::: Models Compared
We compare the performance of the following models:
- IMG-only: This is a simple baseline where we just pass the image through a VGG19 and use the embedding of the image to predict the answer from a fixed vocabulary.
- QUES-only: This is a simple baseline where we just pass the question through an LSTM and use the embedding of the question to predict the answer from a fixed vocabulary.
- SAN BIBREF2: This is a state-of-the-art VQA model which is an encoder-decoder model with a multi-layer stacked attention BIBREF26 mechanism. It obtains a representation for the image using a deep CNN and a representation for the query using an LSTM. It then uses the query representation to locate relevant regions in the image and uses this to pick an answer from a fixed vocabulary.
- SANDY BIBREF1: This is the best-performing model on the DVQA dataset and is a variant of SAN. Unfortunately, the code for this model is not available and the description in the paper was not detailed enough for us to reimplement it. Hence, we report the numbers for this model only on DVQA (from the original paper).
- VOES: This is our model as described in section SECREF3 which is specifically designed for questions which do not have answers from a fixed vocabulary.
- VOES-Oracle: This is our model where the first three stages of VOES are replaced by an Oracle, i.e., the QA model answers questions on a table that has been generated using the ground truth annotations of the plot. With this we can evaluate the performance of the WikiTableQA model when it is not affected by the VED model's errors.
- SAN-VOES: Given the complementary strengths of SAN-VQA and VOES, we train a hybrid model with a binary classifier which given a question decides whether to use the SAN or the VOES model. The data for training this binary classifier is generated by comparing the predictions of a trained SAN model and a trained VOES model on the training dataset. For a given question, the label is set to 1 (pick SAN) if the performance of SAN was better than that of VOES. We ignore questions where there is a tie. The classifier is a simple LSTM based model which computes a representation for the question using an LSTM and uses this representation to predict 1/0. At test time, we first pass the question through this model and depending on the output of this model use SAN or VOES.
Experiments ::: Training Details
SAN: We used an existing implementation of SAN for establishing the initial baseline results. Image features are extracted from the last pooling layer of the VGG19 network. Question features are the last hidden state of the LSTM. Both the LSTM hidden state and the 512-d image feature vector at each location are transformed into a 1024-d vector by a fully connected layer, added, and passed through a non-linearity (tanh). The model was trained using the Adam BIBREF27 optimizer with an initial learning rate of $0.0003$ and a batch size of 128 for 25,000 iterations.
Proposed Pipeline: Of the four stages of the pipeline described in Section 4.2, only two require training, viz., Visual Elements Detection (VED) and Table Question Answering (QA). As mentioned earlier, for VED we train an instance segmentation model (Mask-RCNN BIBREF28) using the bounding box annotations available in our dataset. We trained each model with a batch size of 32 for $200,000$ steps, beyond which no further training benefit was seen. We used RMSProp as the optimizer with an initial learning rate of $0.004$. For Table QA, we trained the model proposed in BIBREF22 using questions from our dataset and the corresponding ground truth tables. Since this model is computationally expensive with a high training time, we could train it using only $400,000$ questions from our training set.
SAN-VOES: The binary question classifier in this hybrid model contains a 50-dimensional word embedding layer followed by an LSTM with 128 hidden units. The output of the LSTM is projected to 256 dimensions and this is then fed to the output layer. The model is trained for 10 epochs using RMSProp with an initial learning rate of 0.001. Accuracy on the validation set is $87.3\%$.
Experiments ::: Evaluation Metric
We used accuracy as the evaluation metric. Specifically, for textual answers (such as India, CO2, etc.) the model's output was considered to be correct only if the predicted answer exactly matches the true answer. However, for numeric answers which contain floating point values such an exact match is a very strict evaluation metric (for example, if the predicted answer is 10.5 and the true answer is 10 then in most cases it would be acceptable). Hence, we relax the accuracy measure to consider the predicted answer to be correct as long as it is within 5% of the correct answer.
Observations and Results
1. Evaluating models on PlotQA dataset (Table TABREF35): The baselines IMG-only and QUES-only performed poorly with an accuracy of $14.84\%$ and $15.35\%$, respectively. We then evaluate SAN, VOES, VOES-Oracle, and SAN-VOES on each of the 9 question-answer types of the PlotQA dataset. SAN performs very well on Yes/No questions and moderately well on Fixed vocab. questions with a good baseline aggregate accuracy of 46.54%. SAN performs poorly on Open vocab. questions, failing to answer almost all the 319,000 questions in this category. On the other hand, VOES fails to answer correctly any of the Yes/No questions, performs moderately well on Fixed vocab. questions, and answers correctly some of the hard Open vocab. questions. SAN-VOES combines the complementary strengths of SAN and VOES with the highest accuracy of 53.96%. In particular, the performance improves significantly for all Fixed vocab. questions, while retaining the high accuracy of SAN on Yes/No questions and VOES' performance on Open vocab. questions. There is a significant difference in the performance of VOES and VOES-Oracle across multiple question-answer types. This implies that the visual element detection in the VOES pipeline can be further improved.
2. Analysis of the VOES pipeline: We analyze the performance of the visual element detection (VED) and OCR modules.
- Table TABREF36 shows that the VED module performs reasonably well at an Intersection Over Union (IOU) of 0.5. For higher IOUs of 0.8 and 0.9, the accuracy falls drastically. For instance, at an IOU of 0.9, dotlines are detected with an accuracy of under 5%. Clearly, such inaccuracies would lead to incorrect table generation and subsequent QA. This brings out an interesting difference between this task and other instance segmentation tasks where the margin of error is higher (where an IOU of 0.5 is accepted). A small error in visual element detection, as indicated by mAP scores of 80%, is considered negligible for VQA tasks; however, for PlotQA, small errors can cause significantly misaligned table generation and subsequent QA. We illustrate this with an example given in Figure 4. The predicted red box, having an IOU of 0.58, estimates the bar size as 760 as opposed to the ground truth of 680, significantly impacting downstream QA accuracy.
Retraining the VED model with a higher IOU of 0.75 only resulted in a small increase in accuracy (last row). Thus, inverting the plot generation function in going from the plot image to the structured table is a difficult CV task.
- In Table TABREF37 we evaluate the performance of the OCR module in standalone/oracle mode and pipeline mode. In the oracle mode, we feed ground truth boxes to the OCR model, whereas in the pipeline mode we perform OCR on the output of the VED module. We observe only a small drop in performance, which indicates that the OCR module is robust to the reduction in the VED module's accuracy at higher IOU, as it does not depend on the class label or the exact position of bounding boxes.
- In summary, a highly accurate VED for structured images is an open challenge to improve reasoning over plots.
3. Evaluating new models on the existing DVQA dataset (Table TABREF34): The proposed model VOES performs better than the existing models (SAN and SANDY-OCR) on DVQA. The higher performance of VOES in comparison to SAN (in contrast to the PlotQA results) suggests that the extraction of the structured table is more accurate on the DVQA dataset. This is because of the limited variability in the axis and tick labels and shorter length (one word only) of labels. The hybrid model, SAN-VOES, improves on the individual models and establishes a new SOTA result on DVQA.
Conclusion
We introduce the PlotQA dataset to reduce the gap between existing synthetic plot datasets and real-world plots and question templates. Analysis of an existing model for VQA for plots, SAN, on PlotQA reveals that it performs poorly for Open Vocabulary questions. We proposed the VOES model as a pipelined approach that combines visual element detection and OCR with QA over tables, specifically for the Open Vocabulary questions. A hybrid model, SAN-VOES, that combines SAN and VOES for different question types, generates state-of-the-art results on both the DVQA and PlotQA datasets. Detailed analysis of the VOES pipeline reveals the need for more accurate visual element detection to improve reasoning over plots. | IMG-only, QUES-only, SAN, SANDY, VOES-Oracle, VOES